Mar 18 07:04:05.077552 kernel: Linux version 6.6.83-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Mar 17 16:07:40 -00 2025
Mar 18 07:04:05.077578 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=d4b838cd9a6f58e8c4a6b615c32b0b28ee0df1660e34033a8fbd0429c6de5fd0
Mar 18 07:04:05.077588 kernel: BIOS-provided physical RAM map:
Mar 18 07:04:05.077596 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Mar 18 07:04:05.077603 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Mar 18 07:04:05.077613 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Mar 18 07:04:05.077621 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdcfff] usable
Mar 18 07:04:05.077629 kernel: BIOS-e820: [mem 0x00000000bffdd000-0x00000000bfffffff] reserved
Mar 18 07:04:05.077636 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Mar 18 07:04:05.077643 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Mar 18 07:04:05.077650 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000013fffffff] usable
Mar 18 07:04:05.077657 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Mar 18 07:04:05.077665 kernel: NX (Execute Disable) protection: active
Mar 18 07:04:05.077674 kernel: APIC: Static calls initialized
Mar 18 07:04:05.077683 kernel: SMBIOS 3.0.0 present.
Mar 18 07:04:05.077691 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.16.3-debian-1.16.3-2 04/01/2014
Mar 18 07:04:05.077698 kernel: Hypervisor detected: KVM
Mar 18 07:04:05.077706 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Mar 18 07:04:05.077713 kernel: kvm-clock: using sched offset of 3464527748 cycles
Mar 18 07:04:05.077723 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Mar 18 07:04:05.077731 kernel: tsc: Detected 1996.249 MHz processor
Mar 18 07:04:05.077739 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Mar 18 07:04:05.077748 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Mar 18 07:04:05.077756 kernel: last_pfn = 0x140000 max_arch_pfn = 0x400000000
Mar 18 07:04:05.077764 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Mar 18 07:04:05.077772 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Mar 18 07:04:05.077780 kernel: last_pfn = 0xbffdd max_arch_pfn = 0x400000000
Mar 18 07:04:05.077787 kernel: ACPI: Early table checksum verification disabled
Mar 18 07:04:05.077797 kernel: ACPI: RSDP 0x00000000000F51E0 000014 (v00 BOCHS )
Mar 18 07:04:05.079430 kernel: ACPI: RSDT 0x00000000BFFE1B65 000030 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 18 07:04:05.079497 kernel: ACPI: FACP 0x00000000BFFE1A49 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 18 07:04:05.079520 kernel: ACPI: DSDT 0x00000000BFFE0040 001A09 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 18 07:04:05.079541 kernel: ACPI: FACS 0x00000000BFFE0000 000040
Mar 18 07:04:05.079562 kernel: ACPI: APIC 0x00000000BFFE1ABD 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Mar 18 07:04:05.079582 kernel: ACPI: WAET 0x00000000BFFE1B3D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 18 07:04:05.079603 kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1a49-0xbffe1abc]
Mar 18 07:04:05.079639 kernel: ACPI: Reserving DSDT table memory at [mem 0xbffe0040-0xbffe1a48]
Mar 18 07:04:05.079659 kernel: ACPI: Reserving FACS table memory at [mem 0xbffe0000-0xbffe003f]
Mar 18 07:04:05.079679 kernel: ACPI: Reserving APIC table memory at [mem 0xbffe1abd-0xbffe1b3c]
Mar 18 07:04:05.079699 kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1b3d-0xbffe1b64]
Mar 18 07:04:05.079728 kernel: No NUMA configuration found
Mar 18 07:04:05.079749 kernel: Faking a node at [mem 0x0000000000000000-0x000000013fffffff]
Mar 18 07:04:05.079770 kernel: NODE_DATA(0) allocated [mem 0x13fff7000-0x13fffcfff]
Mar 18 07:04:05.079796 kernel: Zone ranges:
Mar 18 07:04:05.079870 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Mar 18 07:04:05.079919 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Mar 18 07:04:05.079941 kernel: Normal [mem 0x0000000100000000-0x000000013fffffff]
Mar 18 07:04:05.079961 kernel: Movable zone start for each node
Mar 18 07:04:05.079982 kernel: Early memory node ranges
Mar 18 07:04:05.080003 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Mar 18 07:04:05.080023 kernel: node 0: [mem 0x0000000000100000-0x00000000bffdcfff]
Mar 18 07:04:05.080050 kernel: node 0: [mem 0x0000000100000000-0x000000013fffffff]
Mar 18 07:04:05.080071 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000013fffffff]
Mar 18 07:04:05.080092 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 18 07:04:05.080113 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Mar 18 07:04:05.080134 kernel: On node 0, zone Normal: 35 pages in unavailable ranges
Mar 18 07:04:05.080155 kernel: ACPI: PM-Timer IO Port: 0x608
Mar 18 07:04:05.080176 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Mar 18 07:04:05.080196 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Mar 18 07:04:05.080217 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Mar 18 07:04:05.080243 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Mar 18 07:04:05.080264 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Mar 18 07:04:05.080284 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Mar 18 07:04:05.080305 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Mar 18 07:04:05.080326 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Mar 18 07:04:05.080346 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Mar 18 07:04:05.080367 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Mar 18 07:04:05.080388 kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Mar 18 07:04:05.080408 kernel: Booting paravirtualized kernel on KVM
Mar 18 07:04:05.080434 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Mar 18 07:04:05.080455 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Mar 18 07:04:05.080476 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Mar 18 07:04:05.080497 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Mar 18 07:04:05.080517 kernel: pcpu-alloc: [0] 0 1
Mar 18 07:04:05.080538 kernel: kvm-guest: PV spinlocks disabled, no host support
Mar 18 07:04:05.080564 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=d4b838cd9a6f58e8c4a6b615c32b0b28ee0df1660e34033a8fbd0429c6de5fd0
Mar 18 07:04:05.080586 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Mar 18 07:04:05.080611 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 18 07:04:05.080633 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 18 07:04:05.080653 kernel: Fallback order for Node 0: 0
Mar 18 07:04:05.080674 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1031901
Mar 18 07:04:05.080694 kernel: Policy zone: Normal
Mar 18 07:04:05.080716 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 18 07:04:05.080736 kernel: software IO TLB: area num 2.
Mar 18 07:04:05.080758 kernel: Memory: 3966204K/4193772K available (12288K kernel code, 2303K rwdata, 22744K rodata, 42992K init, 2196K bss, 227308K reserved, 0K cma-reserved)
Mar 18 07:04:05.080779 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Mar 18 07:04:05.080805 kernel: ftrace: allocating 37938 entries in 149 pages
Mar 18 07:04:05.081977 kernel: ftrace: allocated 149 pages with 4 groups
Mar 18 07:04:05.082000 kernel: Dynamic Preempt: voluntary
Mar 18 07:04:05.082021 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 18 07:04:05.082044 kernel: rcu: RCU event tracing is enabled.
Mar 18 07:04:05.082065 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Mar 18 07:04:05.082086 kernel: Trampoline variant of Tasks RCU enabled.
Mar 18 07:04:05.082107 kernel: Rude variant of Tasks RCU enabled.
Mar 18 07:04:05.082127 kernel: Tracing variant of Tasks RCU enabled.
Mar 18 07:04:05.082158 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 18 07:04:05.082178 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Mar 18 07:04:05.082199 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Mar 18 07:04:05.082220 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 18 07:04:05.082240 kernel: Console: colour VGA+ 80x25
Mar 18 07:04:05.082261 kernel: printk: console [tty0] enabled
Mar 18 07:04:05.082282 kernel: printk: console [ttyS0] enabled
Mar 18 07:04:05.082302 kernel: ACPI: Core revision 20230628
Mar 18 07:04:05.082323 kernel: APIC: Switch to symmetric I/O mode setup
Mar 18 07:04:05.082343 kernel: x2apic enabled
Mar 18 07:04:05.082369 kernel: APIC: Switched APIC routing to: physical x2apic
Mar 18 07:04:05.082390 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Mar 18 07:04:05.082410 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Mar 18 07:04:05.082431 kernel: Calibrating delay loop (skipped) preset value.. 3992.49 BogoMIPS (lpj=1996249)
Mar 18 07:04:05.082452 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Mar 18 07:04:05.082473 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Mar 18 07:04:05.082494 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Mar 18 07:04:05.082515 kernel: Spectre V2 : Mitigation: Retpolines
Mar 18 07:04:05.082536 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Mar 18 07:04:05.082561 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Mar 18 07:04:05.082581 kernel: Speculative Store Bypass: Vulnerable
Mar 18 07:04:05.082602 kernel: x86/fpu: x87 FPU will use FXSAVE
Mar 18 07:04:05.082623 kernel: Freeing SMP alternatives memory: 32K
Mar 18 07:04:05.082659 kernel: pid_max: default: 32768 minimum: 301
Mar 18 07:04:05.082685 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Mar 18 07:04:05.082707 kernel: landlock: Up and running.
Mar 18 07:04:05.082729 kernel: SELinux: Initializing.
Mar 18 07:04:05.082751 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 18 07:04:05.082773 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 18 07:04:05.082795 kernel: smpboot: CPU0: AMD Intel Core i7 9xx (Nehalem Class Core i7) (family: 0x6, model: 0x1a, stepping: 0x3)
Mar 18 07:04:05.083913 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 18 07:04:05.083942 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 18 07:04:05.083965 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 18 07:04:05.083987 kernel: Performance Events: AMD PMU driver.
Mar 18 07:04:05.084008 kernel: ... version:                0
Mar 18 07:04:05.084040 kernel: ... bit width:              48
Mar 18 07:04:05.084062 kernel: ... generic registers:      4
Mar 18 07:04:05.084078 kernel: ... value mask:             0000ffffffffffff
Mar 18 07:04:05.084095 kernel: ... max period:             00007fffffffffff
Mar 18 07:04:05.084111 kernel: ... fixed-purpose events:   0
Mar 18 07:04:05.084127 kernel: ... event mask:             000000000000000f
Mar 18 07:04:05.084143 kernel: signal: max sigframe size: 1440
Mar 18 07:04:05.084159 kernel: rcu: Hierarchical SRCU implementation.
Mar 18 07:04:05.084176 kernel: rcu: Max phase no-delay instances is 400.
Mar 18 07:04:05.084196 kernel: smp: Bringing up secondary CPUs ...
Mar 18 07:04:05.084213 kernel: smpboot: x86: Booting SMP configuration:
Mar 18 07:04:05.084229 kernel: .... node #0, CPUs: #1
Mar 18 07:04:05.084245 kernel: smp: Brought up 1 node, 2 CPUs
Mar 18 07:04:05.084261 kernel: smpboot: Max logical packages: 2
Mar 18 07:04:05.084278 kernel: smpboot: Total of 2 processors activated (7984.99 BogoMIPS)
Mar 18 07:04:05.084294 kernel: devtmpfs: initialized
Mar 18 07:04:05.084311 kernel: x86/mm: Memory block size: 128MB
Mar 18 07:04:05.084327 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 18 07:04:05.084344 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Mar 18 07:04:05.084364 kernel: pinctrl core: initialized pinctrl subsystem
Mar 18 07:04:05.084380 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 18 07:04:05.084396 kernel: audit: initializing netlink subsys (disabled)
Mar 18 07:04:05.084413 kernel: audit: type=2000 audit(1742281444.304:1): state=initialized audit_enabled=0 res=1
Mar 18 07:04:05.084429 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 18 07:04:05.084445 kernel: thermal_sys: Registered thermal governor 'user_space'
Mar 18 07:04:05.084461 kernel: cpuidle: using governor menu
Mar 18 07:04:05.084477 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 18 07:04:05.084497 kernel: dca service started, version 1.12.1
Mar 18 07:04:05.084513 kernel: PCI: Using configuration type 1 for base access
Mar 18 07:04:05.084530 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Mar 18 07:04:05.084546 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 18 07:04:05.084563 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Mar 18 07:04:05.084579 kernel: ACPI: Added _OSI(Module Device)
Mar 18 07:04:05.084595 kernel: ACPI: Added _OSI(Processor Device)
Mar 18 07:04:05.084611 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Mar 18 07:04:05.084627 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 18 07:04:05.084644 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 18 07:04:05.084664 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Mar 18 07:04:05.084680 kernel: ACPI: Interpreter enabled
Mar 18 07:04:05.084696 kernel: ACPI: PM: (supports S0 S3 S5)
Mar 18 07:04:05.084712 kernel: ACPI: Using IOAPIC for interrupt routing
Mar 18 07:04:05.084729 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Mar 18 07:04:05.084745 kernel: PCI: Using E820 reservations for host bridge windows
Mar 18 07:04:05.084762 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Mar 18 07:04:05.084778 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Mar 18 07:04:05.087121 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Mar 18 07:04:05.087326 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Mar 18 07:04:05.087499 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Mar 18 07:04:05.087524 kernel: acpiphp: Slot [3] registered
Mar 18 07:04:05.087541 kernel: acpiphp: Slot [4] registered
Mar 18 07:04:05.087558 kernel: acpiphp: Slot [5] registered
Mar 18 07:04:05.087574 kernel: acpiphp: Slot [6] registered
Mar 18 07:04:05.087590 kernel: acpiphp: Slot [7] registered
Mar 18 07:04:05.087612 kernel: acpiphp: Slot [8] registered
Mar 18 07:04:05.087628 kernel: acpiphp: Slot [9] registered
Mar 18 07:04:05.087644 kernel: acpiphp: Slot [10] registered
Mar 18 07:04:05.087660 kernel: acpiphp: Slot [11] registered
Mar 18 07:04:05.087676 kernel: acpiphp: Slot [12] registered
Mar 18 07:04:05.087692 kernel: acpiphp: Slot [13] registered
Mar 18 07:04:05.087708 kernel: acpiphp: Slot [14] registered
Mar 18 07:04:05.087724 kernel: acpiphp: Slot [15] registered
Mar 18 07:04:05.087740 kernel: acpiphp: Slot [16] registered
Mar 18 07:04:05.087759 kernel: acpiphp: Slot [17] registered
Mar 18 07:04:05.087775 kernel: acpiphp: Slot [18] registered
Mar 18 07:04:05.087791 kernel: acpiphp: Slot [19] registered
Mar 18 07:04:05.088204 kernel: acpiphp: Slot [20] registered
Mar 18 07:04:05.088220 kernel: acpiphp: Slot [21] registered
Mar 18 07:04:05.088229 kernel: acpiphp: Slot [22] registered
Mar 18 07:04:05.088237 kernel: acpiphp: Slot [23] registered
Mar 18 07:04:05.088246 kernel: acpiphp: Slot [24] registered
Mar 18 07:04:05.088255 kernel: acpiphp: Slot [25] registered
Mar 18 07:04:05.088263 kernel: acpiphp: Slot [26] registered
Mar 18 07:04:05.088275 kernel: acpiphp: Slot [27] registered
Mar 18 07:04:05.088284 kernel: acpiphp: Slot [28] registered
Mar 18 07:04:05.088293 kernel: acpiphp: Slot [29] registered
Mar 18 07:04:05.088301 kernel: acpiphp: Slot [30] registered
Mar 18 07:04:05.088310 kernel: acpiphp: Slot [31] registered
Mar 18 07:04:05.088318 kernel: PCI host bridge to bus 0000:00
Mar 18 07:04:05.088423 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Mar 18 07:04:05.088509 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Mar 18 07:04:05.088597 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Mar 18 07:04:05.088679 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Mar 18 07:04:05.088760 kernel: pci_bus 0000:00: root bus resource [mem 0xc000000000-0xc07fffffff window]
Mar 18 07:04:05.090150 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 18 07:04:05.090267 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Mar 18 07:04:05.090372 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Mar 18 07:04:05.090481 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Mar 18 07:04:05.090575 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc120-0xc12f]
Mar 18 07:04:05.090666 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Mar 18 07:04:05.090757 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Mar 18 07:04:05.090887 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Mar 18 07:04:05.090982 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Mar 18 07:04:05.091105 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Mar 18 07:04:05.091204 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Mar 18 07:04:05.091295 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Mar 18 07:04:05.091394 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Mar 18 07:04:05.091486 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Mar 18 07:04:05.091577 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xc000000000-0xc000003fff 64bit pref]
Mar 18 07:04:05.091667 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfeb90000-0xfeb90fff]
Mar 18 07:04:05.091759 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfeb80000-0xfeb8ffff pref]
Mar 18 07:04:05.092909 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Mar 18 07:04:05.093012 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Mar 18 07:04:05.093103 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc080-0xc0bf]
Mar 18 07:04:05.093193 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfeb91000-0xfeb91fff]
Mar 18 07:04:05.093283 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xc000004000-0xc000007fff 64bit pref]
Mar 18 07:04:05.093372 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb00000-0xfeb7ffff pref]
Mar 18 07:04:05.093471 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Mar 18 07:04:05.093568 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Mar 18 07:04:05.093657 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfeb92000-0xfeb92fff]
Mar 18 07:04:05.093748 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xc000008000-0xc00000bfff 64bit pref]
Mar 18 07:04:05.096592 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00
Mar 18 07:04:05.096689 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc0c0-0xc0ff]
Mar 18 07:04:05.096778 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xc00000c000-0xc00000ffff 64bit pref]
Mar 18 07:04:05.096903 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00
Mar 18 07:04:05.097000 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc100-0xc11f]
Mar 18 07:04:05.097089 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfeb93000-0xfeb93fff]
Mar 18 07:04:05.097177 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xc000010000-0xc000013fff 64bit pref]
Mar 18 07:04:05.097191 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Mar 18 07:04:05.097200 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Mar 18 07:04:05.097210 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Mar 18 07:04:05.097219 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Mar 18 07:04:05.097231 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Mar 18 07:04:05.097241 kernel: iommu: Default domain type: Translated
Mar 18 07:04:05.097250 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Mar 18 07:04:05.097259 kernel: PCI: Using ACPI for IRQ routing
Mar 18 07:04:05.097268 kernel: PCI: pci_cache_line_size set to 64 bytes
Mar 18 07:04:05.097277 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Mar 18 07:04:05.097286 kernel: e820: reserve RAM buffer [mem 0xbffdd000-0xbfffffff]
Mar 18 07:04:05.097374 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Mar 18 07:04:05.097464 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Mar 18 07:04:05.097558 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Mar 18 07:04:05.097571 kernel: vgaarb: loaded
Mar 18 07:04:05.097580 kernel: clocksource: Switched to clocksource kvm-clock
Mar 18 07:04:05.097589 kernel: VFS: Disk quotas dquot_6.6.0
Mar 18 07:04:05.097598 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 18 07:04:05.097607 kernel: pnp: PnP ACPI init
Mar 18 07:04:05.097697 kernel: pnp 00:03: [dma 2]
Mar 18 07:04:05.097712 kernel: pnp: PnP ACPI: found 5 devices
Mar 18 07:04:05.097721 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Mar 18 07:04:05.097734 kernel: NET: Registered PF_INET protocol family
Mar 18 07:04:05.097743 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 18 07:04:05.097752 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 18 07:04:05.097761 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 18 07:04:05.097770 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 18 07:04:05.097779 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Mar 18 07:04:05.097788 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 18 07:04:05.097797 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 18 07:04:05.097860 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 18 07:04:05.097870 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 18 07:04:05.097879 kernel: NET: Registered PF_XDP protocol family
Mar 18 07:04:05.097966 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Mar 18 07:04:05.098045 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Mar 18 07:04:05.098125 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Mar 18 07:04:05.098203 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Mar 18 07:04:05.098281 kernel: pci_bus 0000:00: resource 8 [mem 0xc000000000-0xc07fffffff window]
Mar 18 07:04:05.098374 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Mar 18 07:04:05.098471 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Mar 18 07:04:05.098484 kernel: PCI: CLS 0 bytes, default 64
Mar 18 07:04:05.098493 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Mar 18 07:04:05.098503 kernel: software IO TLB: mapped [mem 0x00000000bbfdd000-0x00000000bffdd000] (64MB)
Mar 18 07:04:05.098512 kernel: Initialise system trusted keyrings
Mar 18 07:04:05.098521 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 18 07:04:05.098530 kernel: Key type asymmetric registered
Mar 18 07:04:05.098539 kernel: Asymmetric key parser 'x509' registered
Mar 18 07:04:05.098551 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Mar 18 07:04:05.098560 kernel: io scheduler mq-deadline registered
Mar 18 07:04:05.098569 kernel: io scheduler kyber registered
Mar 18 07:04:05.098578 kernel: io scheduler bfq registered
Mar 18 07:04:05.098587 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Mar 18 07:04:05.098596 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Mar 18 07:04:05.098605 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Mar 18 07:04:05.098614 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Mar 18 07:04:05.098624 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Mar 18 07:04:05.098635 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 18 07:04:05.098644 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Mar 18 07:04:05.098653 kernel: random: crng init done
Mar 18 07:04:05.098661 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Mar 18 07:04:05.098670 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Mar 18 07:04:05.098679 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Mar 18 07:04:05.098775 kernel: rtc_cmos 00:04: RTC can wake from S4
Mar 18 07:04:05.098790 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Mar 18 07:04:05.098898 kernel: rtc_cmos 00:04: registered as rtc0
Mar 18 07:04:05.098985 kernel: rtc_cmos 00:04: setting system clock to 2025-03-18T07:04:04 UTC (1742281444)
Mar 18 07:04:05.099189 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Mar 18 07:04:05.099203 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Mar 18 07:04:05.099212 kernel: NET: Registered PF_INET6 protocol family
Mar 18 07:04:05.099221 kernel: Segment Routing with IPv6
Mar 18 07:04:05.099230 kernel: In-situ OAM (IOAM) with IPv6
Mar 18 07:04:05.099239 kernel: NET: Registered PF_PACKET protocol family
Mar 18 07:04:05.099248 kernel: Key type dns_resolver registered
Mar 18 07:04:05.099260 kernel: IPI shorthand broadcast: enabled
Mar 18 07:04:05.099269 kernel: sched_clock: Marking stable (1002007137, 170553380)->(1198832029, -26271512)
Mar 18 07:04:05.099278 kernel: registered taskstats version 1
Mar 18 07:04:05.099287 kernel: Loading compiled-in X.509 certificates
Mar 18 07:04:05.099296 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.83-flatcar: 608fb88224bc0ea76afefc598557abb0413f36c0'
Mar 18 07:04:05.099305 kernel: Key type .fscrypt registered
Mar 18 07:04:05.099313 kernel: Key type fscrypt-provisioning registered
Mar 18 07:04:05.099322 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 18 07:04:05.099331 kernel: ima: Allocated hash algorithm: sha1 Mar 18 07:04:05.099342 kernel: ima: No architecture policies found Mar 18 07:04:05.099350 kernel: clk: Disabling unused clocks Mar 18 07:04:05.099359 kernel: Freeing unused kernel image (initmem) memory: 42992K Mar 18 07:04:05.099368 kernel: Write protecting the kernel read-only data: 36864k Mar 18 07:04:05.099377 kernel: Freeing unused kernel image (rodata/data gap) memory: 1832K Mar 18 07:04:05.099386 kernel: Run /init as init process Mar 18 07:04:05.099394 kernel: with arguments: Mar 18 07:04:05.099403 kernel: /init Mar 18 07:04:05.099412 kernel: with environment: Mar 18 07:04:05.099423 kernel: HOME=/ Mar 18 07:04:05.099431 kernel: TERM=linux Mar 18 07:04:05.099440 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Mar 18 07:04:05.099452 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Mar 18 07:04:05.099464 systemd[1]: Detected virtualization kvm. Mar 18 07:04:05.099473 systemd[1]: Detected architecture x86-64. Mar 18 07:04:05.099483 systemd[1]: Running in initrd. Mar 18 07:04:05.099494 systemd[1]: No hostname configured, using default hostname. Mar 18 07:04:05.099503 systemd[1]: Hostname set to . Mar 18 07:04:05.099513 systemd[1]: Initializing machine ID from VM UUID. Mar 18 07:04:05.099522 systemd[1]: Queued start job for default target initrd.target. Mar 18 07:04:05.099532 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 18 07:04:05.099542 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Mar 18 07:04:05.099552 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Mar 18 07:04:05.099572 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Mar 18 07:04:05.099584 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Mar 18 07:04:05.099594 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Mar 18 07:04:05.099605 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Mar 18 07:04:05.099615 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Mar 18 07:04:05.099628 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 18 07:04:05.099638 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 18 07:04:05.099648 systemd[1]: Reached target paths.target - Path Units. Mar 18 07:04:05.099657 systemd[1]: Reached target slices.target - Slice Units. Mar 18 07:04:05.099667 systemd[1]: Reached target swap.target - Swaps. Mar 18 07:04:05.099677 systemd[1]: Reached target timers.target - Timer Units. Mar 18 07:04:05.099686 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Mar 18 07:04:05.099696 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 18 07:04:05.099706 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Mar 18 07:04:05.099718 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Mar 18 07:04:05.099728 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 18 07:04:05.099737 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 18 07:04:05.099747 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
Mar 18 07:04:05.099757 systemd[1]: Reached target sockets.target - Socket Units. Mar 18 07:04:05.099767 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Mar 18 07:04:05.099777 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 18 07:04:05.099786 systemd[1]: Finished network-cleanup.service - Network Cleanup. Mar 18 07:04:05.099796 systemd[1]: Starting systemd-fsck-usr.service... Mar 18 07:04:05.101328 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 18 07:04:05.101343 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 18 07:04:05.101353 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 18 07:04:05.101363 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Mar 18 07:04:05.101393 systemd-journald[185]: Collecting audit messages is disabled. Mar 18 07:04:05.101422 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 18 07:04:05.101433 systemd[1]: Finished systemd-fsck-usr.service. Mar 18 07:04:05.101448 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Mar 18 07:04:05.101460 systemd-journald[185]: Journal started Mar 18 07:04:05.101482 systemd-journald[185]: Runtime Journal (/run/log/journal/3529632c40924aaeadbd6af6f602ec80) is 8.0M, max 78.3M, 70.3M free. Mar 18 07:04:05.104829 systemd[1]: Started systemd-journald.service - Journal Service. Mar 18 07:04:05.105937 systemd-modules-load[186]: Inserted module 'overlay' Mar 18 07:04:05.155405 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Mar 18 07:04:05.155430 kernel: Bridge firewalling registered Mar 18 07:04:05.116970 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... 
Mar 18 07:04:05.138596 systemd-modules-load[186]: Inserted module 'br_netfilter'
Mar 18 07:04:05.163157 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 18 07:04:05.164136 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 18 07:04:05.164769 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 18 07:04:05.171121 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 18 07:04:05.174836 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 18 07:04:05.176683 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 18 07:04:05.179537 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 18 07:04:05.193510 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 18 07:04:05.200023 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 18 07:04:05.201796 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 18 07:04:05.202896 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 18 07:04:05.210780 dracut-cmdline[217]: dracut-dracut-053
Mar 18 07:04:05.215604 dracut-cmdline[217]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=d4b838cd9a6f58e8c4a6b615c32b0b28ee0df1660e34033a8fbd0429c6de5fd0
Mar 18 07:04:05.213996 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 18 07:04:05.245774 systemd-resolved[228]: Positive Trust Anchors:
Mar 18 07:04:05.245787 systemd-resolved[228]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 18 07:04:05.245845 systemd-resolved[228]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 18 07:04:05.252390 systemd-resolved[228]: Defaulting to hostname 'linux'.
Mar 18 07:04:05.253300 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 18 07:04:05.254607 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 18 07:04:05.292861 kernel: SCSI subsystem initialized
Mar 18 07:04:05.302868 kernel: Loading iSCSI transport class v2.0-870.
Mar 18 07:04:05.314864 kernel: iscsi: registered transport (tcp)
Mar 18 07:04:05.337867 kernel: iscsi: registered transport (qla4xxx)
Mar 18 07:04:05.337933 kernel: QLogic iSCSI HBA Driver
Mar 18 07:04:05.398274 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 18 07:04:05.409083 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 18 07:04:05.438429 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 18 07:04:05.438514 kernel: device-mapper: uevent: version 1.0.3
Mar 18 07:04:05.438544 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Mar 18 07:04:05.499927 kernel: raid6: sse2x4 gen() 5047 MB/s
Mar 18 07:04:05.517889 kernel: raid6: sse2x2 gen() 8611 MB/s
Mar 18 07:04:05.536383 kernel: raid6: sse2x1 gen() 9968 MB/s
Mar 18 07:04:05.536500 kernel: raid6: using algorithm sse2x1 gen() 9968 MB/s
Mar 18 07:04:05.555342 kernel: raid6: .... xor() 7409 MB/s, rmw enabled
Mar 18 07:04:05.555457 kernel: raid6: using ssse3x2 recovery algorithm
Mar 18 07:04:05.577074 kernel: xor: measuring software checksum speed
Mar 18 07:04:05.577190 kernel: prefetch64-sse : 18499 MB/sec
Mar 18 07:04:05.580506 kernel: generic_sse : 15163 MB/sec
Mar 18 07:04:05.580566 kernel: xor: using function: prefetch64-sse (18499 MB/sec)
Mar 18 07:04:05.755410 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 18 07:04:05.770044 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 18 07:04:05.777943 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 18 07:04:05.813650 systemd-udevd[404]: Using default interface naming scheme 'v255'.
Mar 18 07:04:05.823375 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 18 07:04:05.834095 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Mar 18 07:04:05.863548 dracut-pre-trigger[413]: rd.md=0: removing MD RAID activation
Mar 18 07:04:05.911575 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 18 07:04:05.918134 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 18 07:04:05.963727 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 18 07:04:05.974992 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Mar 18 07:04:06.016058 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Mar 18 07:04:06.017389 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 18 07:04:06.019671 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 18 07:04:06.022159 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 18 07:04:06.030390 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Mar 18 07:04:06.049111 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Mar 18 07:04:06.064853 kernel: virtio_blk virtio2: 2/0/0 default/read/poll queues
Mar 18 07:04:06.089697 kernel: virtio_blk virtio2: [vda] 20971520 512-byte logical blocks (10.7 GB/10.0 GiB)
Mar 18 07:04:06.089850 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Mar 18 07:04:06.089865 kernel: GPT:17805311 != 20971519
Mar 18 07:04:06.089877 kernel: GPT:Alternate GPT header not at the end of the disk.
Mar 18 07:04:06.089889 kernel: GPT:17805311 != 20971519
Mar 18 07:04:06.089908 kernel: GPT: Use GNU Parted to correct GPT errors.
Mar 18 07:04:06.089919 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 18 07:04:06.079074 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 18 07:04:06.079155 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 18 07:04:06.079932 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 18 07:04:06.080618 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 18 07:04:06.080667 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 18 07:04:06.081258 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 18 07:04:06.098226 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 18 07:04:06.100827 kernel: libata version 3.00 loaded.
Mar 18 07:04:06.105976 kernel: ata_piix 0000:00:01.1: version 2.13
Mar 18 07:04:06.121284 kernel: scsi host0: ata_piix
Mar 18 07:04:06.121420 kernel: scsi host1: ata_piix
Mar 18 07:04:06.121534 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc120 irq 14
Mar 18 07:04:06.121548 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc128 irq 15
Mar 18 07:04:06.126854 kernel: BTRFS: device fsid 2b8ebefd-e897-48f6-96d5-0893fbb7c64a devid 1 transid 40 /dev/vda3 scanned by (udev-worker) (457)
Mar 18 07:04:06.132902 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (456)
Mar 18 07:04:06.147693 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Mar 18 07:04:06.193348 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Mar 18 07:04:06.194335 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 18 07:04:06.200682 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Mar 18 07:04:06.201395 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Mar 18 07:04:06.213882 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Mar 18 07:04:06.230356 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Mar 18 07:04:06.234591 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 18 07:04:06.245536 disk-uuid[505]: Primary Header is updated.
Mar 18 07:04:06.245536 disk-uuid[505]: Secondary Entries is updated.
Mar 18 07:04:06.245536 disk-uuid[505]: Secondary Header is updated.
Mar 18 07:04:06.256840 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 18 07:04:06.276206 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 18 07:04:07.277939 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 18 07:04:07.278608 disk-uuid[506]: The operation has completed successfully.
Mar 18 07:04:07.357401 systemd[1]: disk-uuid.service: Deactivated successfully.
Mar 18 07:04:07.357603 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Mar 18 07:04:07.382953 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Mar 18 07:04:07.389327 sh[525]: Success
Mar 18 07:04:07.416914 kernel: device-mapper: verity: sha256 using implementation "sha256-ssse3"
Mar 18 07:04:07.481396 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Mar 18 07:04:07.491961 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Mar 18 07:04:07.498347 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Mar 18 07:04:07.516999 kernel: BTRFS info (device dm-0): first mount of filesystem 2b8ebefd-e897-48f6-96d5-0893fbb7c64a
Mar 18 07:04:07.517067 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Mar 18 07:04:07.517098 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Mar 18 07:04:07.519139 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Mar 18 07:04:07.520709 kernel: BTRFS info (device dm-0): using free space tree
Mar 18 07:04:07.536613 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Mar 18 07:04:07.537778 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Mar 18 07:04:07.546988 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Mar 18 07:04:07.552056 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Mar 18 07:04:07.572083 kernel: BTRFS info (device vda6): first mount of filesystem 7b241d32-136b-4fe3-b105-cecff2b2cf64
Mar 18 07:04:07.580568 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 18 07:04:07.580625 kernel: BTRFS info (device vda6): using free space tree
Mar 18 07:04:07.590848 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 18 07:04:07.603513 systemd[1]: mnt-oem.mount: Deactivated successfully.
Mar 18 07:04:07.606748 kernel: BTRFS info (device vda6): last unmount of filesystem 7b241d32-136b-4fe3-b105-cecff2b2cf64
Mar 18 07:04:07.624627 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Mar 18 07:04:07.634979 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Mar 18 07:04:07.718516 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 18 07:04:07.728968 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 18 07:04:07.749720 systemd-networkd[709]: lo: Link UP
Mar 18 07:04:07.750406 systemd-networkd[709]: lo: Gained carrier
Mar 18 07:04:07.751564 systemd-networkd[709]: Enumeration completed
Mar 18 07:04:07.753187 systemd-networkd[709]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 18 07:04:07.753193 systemd-networkd[709]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 18 07:04:07.755992 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 18 07:04:07.756586 systemd[1]: Reached target network.target - Network.
Mar 18 07:04:07.757242 systemd-networkd[709]: eth0: Link UP
Mar 18 07:04:07.757245 systemd-networkd[709]: eth0: Gained carrier
Mar 18 07:04:07.757255 systemd-networkd[709]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 18 07:04:07.765849 systemd-networkd[709]: eth0: DHCPv4 address 172.24.4.138/24, gateway 172.24.4.1 acquired from 172.24.4.1
Mar 18 07:04:07.791301 ignition[639]: Ignition 2.20.0
Mar 18 07:04:07.791315 ignition[639]: Stage: fetch-offline
Mar 18 07:04:07.793088 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 18 07:04:07.791358 ignition[639]: no configs at "/usr/lib/ignition/base.d"
Mar 18 07:04:07.791369 ignition[639]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Mar 18 07:04:07.791463 ignition[639]: parsed url from cmdline: ""
Mar 18 07:04:07.791467 ignition[639]: no config URL provided
Mar 18 07:04:07.791473 ignition[639]: reading system config file "/usr/lib/ignition/user.ign"
Mar 18 07:04:07.791482 ignition[639]: no config at "/usr/lib/ignition/user.ign"
Mar 18 07:04:07.791486 ignition[639]: failed to fetch config: resource requires networking
Mar 18 07:04:07.791682 ignition[639]: Ignition finished successfully
Mar 18 07:04:07.800046 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Mar 18 07:04:07.813006 ignition[718]: Ignition 2.20.0
Mar 18 07:04:07.813019 ignition[718]: Stage: fetch
Mar 18 07:04:07.813205 ignition[718]: no configs at "/usr/lib/ignition/base.d"
Mar 18 07:04:07.813217 ignition[718]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Mar 18 07:04:07.813307 ignition[718]: parsed url from cmdline: ""
Mar 18 07:04:07.813311 ignition[718]: no config URL provided
Mar 18 07:04:07.813317 ignition[718]: reading system config file "/usr/lib/ignition/user.ign"
Mar 18 07:04:07.813325 ignition[718]: no config at "/usr/lib/ignition/user.ign"
Mar 18 07:04:07.813407 ignition[718]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1
Mar 18 07:04:07.813466 ignition[718]: config drive ("/dev/disk/by-label/config-2") not found. Waiting...
Mar 18 07:04:07.813495 ignition[718]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting...
Mar 18 07:04:08.051509 ignition[718]: GET result: OK
Mar 18 07:04:08.051700 ignition[718]: parsing config with SHA512: 800aaee35ad1074f6c8ac707fe09ce1d8c7c1039ffe4793e179d953ec276fb44b0ef4b50bfab2c766da54fc9ba9614faaee136ea686cccb90fe98ec071503287
Mar 18 07:04:08.059074 systemd-resolved[228]: Detected conflict on linux IN A 172.24.4.138
Mar 18 07:04:08.059101 systemd-resolved[228]: Hostname conflict, changing published hostname from 'linux' to 'linux3'.
Mar 18 07:04:08.066461 unknown[718]: fetched base config from "system"
Mar 18 07:04:08.066491 unknown[718]: fetched base config from "system"
Mar 18 07:04:08.067636 ignition[718]: fetch: fetch complete
Mar 18 07:04:08.066506 unknown[718]: fetched user config from "openstack"
Mar 18 07:04:08.067649 ignition[718]: fetch: fetch passed
Mar 18 07:04:08.071183 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Mar 18 07:04:08.067738 ignition[718]: Ignition finished successfully
Mar 18 07:04:08.081247 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Mar 18 07:04:08.116582 ignition[725]: Ignition 2.20.0
Mar 18 07:04:08.116615 ignition[725]: Stage: kargs
Mar 18 07:04:08.117117 ignition[725]: no configs at "/usr/lib/ignition/base.d"
Mar 18 07:04:08.117144 ignition[725]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Mar 18 07:04:08.119586 ignition[725]: kargs: kargs passed
Mar 18 07:04:08.121803 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Mar 18 07:04:08.119684 ignition[725]: Ignition finished successfully
Mar 18 07:04:08.138626 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Mar 18 07:04:08.165599 ignition[731]: Ignition 2.20.0
Mar 18 07:04:08.165625 ignition[731]: Stage: disks
Mar 18 07:04:08.166101 ignition[731]: no configs at "/usr/lib/ignition/base.d"
Mar 18 07:04:08.166128 ignition[731]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Mar 18 07:04:08.168756 ignition[731]: disks: disks passed
Mar 18 07:04:08.170890 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Mar 18 07:04:08.168910 ignition[731]: Ignition finished successfully
Mar 18 07:04:08.174090 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Mar 18 07:04:08.176016 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 18 07:04:08.178706 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 18 07:04:08.181247 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 18 07:04:08.184219 systemd[1]: Reached target basic.target - Basic System.
Mar 18 07:04:08.196316 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 18 07:04:08.226930 systemd-fsck[739]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Mar 18 07:04:08.244612 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Mar 18 07:04:08.251011 systemd[1]: Mounting sysroot.mount - /sysroot...
Mar 18 07:04:08.416854 kernel: EXT4-fs (vda9): mounted filesystem 345fc709-8965-4219-b368-16e508c3d632 r/w with ordered data mode. Quota mode: none.
Mar 18 07:04:08.417422 systemd[1]: Mounted sysroot.mount - /sysroot.
Mar 18 07:04:08.419110 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Mar 18 07:04:08.424990 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 18 07:04:08.428311 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Mar 18 07:04:08.432240 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Mar 18 07:04:08.436030 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent...
Mar 18 07:04:08.443103 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (747)
Mar 18 07:04:08.441644 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 18 07:04:08.441678 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 18 07:04:08.446933 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Mar 18 07:04:08.470501 kernel: BTRFS info (device vda6): first mount of filesystem 7b241d32-136b-4fe3-b105-cecff2b2cf64
Mar 18 07:04:08.470525 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 18 07:04:08.470538 kernel: BTRFS info (device vda6): using free space tree
Mar 18 07:04:08.480226 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Mar 18 07:04:08.488033 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 18 07:04:08.501128 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 18 07:04:08.576035 initrd-setup-root[775]: cut: /sysroot/etc/passwd: No such file or directory
Mar 18 07:04:08.582360 initrd-setup-root[782]: cut: /sysroot/etc/group: No such file or directory
Mar 18 07:04:08.588055 initrd-setup-root[789]: cut: /sysroot/etc/shadow: No such file or directory
Mar 18 07:04:08.592104 initrd-setup-root[796]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 18 07:04:08.682917 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Mar 18 07:04:08.687906 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Mar 18 07:04:08.690945 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Mar 18 07:04:08.697576 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 18 07:04:08.700765 kernel: BTRFS info (device vda6): last unmount of filesystem 7b241d32-136b-4fe3-b105-cecff2b2cf64
Mar 18 07:04:08.730629 ignition[864]: INFO : Ignition 2.20.0
Mar 18 07:04:08.730629 ignition[864]: INFO : Stage: mount
Mar 18 07:04:08.735056 ignition[864]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 18 07:04:08.735056 ignition[864]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Mar 18 07:04:08.735056 ignition[864]: INFO : mount: mount passed
Mar 18 07:04:08.735056 ignition[864]: INFO : Ignition finished successfully
Mar 18 07:04:08.733363 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 18 07:04:08.741957 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 18 07:04:09.694157 systemd-networkd[709]: eth0: Gained IPv6LL
Mar 18 07:04:15.647102 coreos-metadata[749]: Mar 18 07:04:15.646 WARN failed to locate config-drive, using the metadata service API instead
Mar 18 07:04:15.684405 coreos-metadata[749]: Mar 18 07:04:15.684 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Mar 18 07:04:15.702449 coreos-metadata[749]: Mar 18 07:04:15.702 INFO Fetch successful
Mar 18 07:04:15.706207 coreos-metadata[749]: Mar 18 07:04:15.704 INFO wrote hostname ci-4152-2-2-a-a1f36745dc.novalocal to /sysroot/etc/hostname
Mar 18 07:04:15.706707 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully.
Mar 18 07:04:15.706965 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent.
Mar 18 07:04:15.720055 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 18 07:04:15.760175 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 18 07:04:15.777892 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (881)
Mar 18 07:04:15.786347 kernel: BTRFS info (device vda6): first mount of filesystem 7b241d32-136b-4fe3-b105-cecff2b2cf64
Mar 18 07:04:15.786453 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 18 07:04:15.790645 kernel: BTRFS info (device vda6): using free space tree
Mar 18 07:04:15.802247 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 18 07:04:15.807488 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 18 07:04:15.849948 ignition[898]: INFO : Ignition 2.20.0
Mar 18 07:04:15.849948 ignition[898]: INFO : Stage: files
Mar 18 07:04:15.852994 ignition[898]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 18 07:04:15.852994 ignition[898]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Mar 18 07:04:15.852994 ignition[898]: DEBUG : files: compiled without relabeling support, skipping
Mar 18 07:04:15.858349 ignition[898]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 18 07:04:15.858349 ignition[898]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 18 07:04:15.865477 ignition[898]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 18 07:04:15.867619 ignition[898]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 18 07:04:15.867619 ignition[898]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 18 07:04:15.866637 unknown[898]: wrote ssh authorized keys file for user: core
Mar 18 07:04:15.872975 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Mar 18 07:04:15.872975 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Mar 18 07:04:15.872975 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Mar 18 07:04:15.872975 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Mar 18 07:04:15.946687 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Mar 18 07:04:16.242727 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Mar 18 07:04:16.242727 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 18 07:04:16.242727 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Mar 18 07:04:16.939344 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK
Mar 18 07:04:17.354572 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 18 07:04:17.354572 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh"
Mar 18 07:04:17.359249 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh"
Mar 18 07:04:17.359249 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 18 07:04:17.359249 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 18 07:04:17.359249 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 18 07:04:17.359249 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 18 07:04:17.359249 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 18 07:04:17.359249 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 18 07:04:17.359249 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 18 07:04:17.359249 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 18 07:04:17.359249 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Mar 18 07:04:17.359249 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Mar 18 07:04:17.359249 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Mar 18 07:04:17.359249 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1
Mar 18 07:04:17.825412 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET result: OK
Mar 18 07:04:19.595885 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Mar 18 07:04:19.595885 ignition[898]: INFO : files: op(d): [started] processing unit "containerd.service"
Mar 18 07:04:19.605653 ignition[898]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Mar 18 07:04:19.610006 ignition[898]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Mar 18 07:04:19.610006 ignition[898]: INFO : files: op(d): [finished] processing unit "containerd.service"
Mar 18 07:04:19.610006 ignition[898]: INFO : files: op(f): [started] processing unit "prepare-helm.service"
Mar 18 07:04:19.610006 ignition[898]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 18 07:04:19.610006 ignition[898]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 18 07:04:19.610006 ignition[898]: INFO : files: op(f): [finished] processing unit "prepare-helm.service"
Mar 18 07:04:19.610006 ignition[898]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Mar 18 07:04:19.610006 ignition[898]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Mar 18 07:04:19.610006 ignition[898]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 18 07:04:19.610006 ignition[898]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 18 07:04:19.610006 ignition[898]: INFO : files: files passed
Mar 18 07:04:19.610006 ignition[898]: INFO : Ignition finished successfully
Mar 18 07:04:19.608193 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 18 07:04:19.620601 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 18 07:04:19.624395 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 18 07:04:19.654768 initrd-setup-root-after-ignition[927]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 18 07:04:19.654768 initrd-setup-root-after-ignition[927]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 18 07:04:19.655369 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 18 07:04:19.655471 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Mar 18 07:04:19.662082 initrd-setup-root-after-ignition[931]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 18 07:04:19.663432 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 18 07:04:19.664725 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Mar 18 07:04:19.669058 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 18 07:04:19.702940 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 18 07:04:19.703107 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Mar 18 07:04:19.703946 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Mar 18 07:04:19.705413 systemd[1]: Reached target initrd.target - Initrd Default Target.
Mar 18 07:04:19.707306 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Mar 18 07:04:19.720947 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Mar 18 07:04:19.736755 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 18 07:04:19.743072 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Mar 18 07:04:19.758040 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Mar 18 07:04:19.760609 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 18 07:04:19.761907 systemd[1]: Stopped target timers.target - Timer Units.
Mar 18 07:04:19.764188 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 18 07:04:19.764403 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 18 07:04:19.767311 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Mar 18 07:04:19.768994 systemd[1]: Stopped target basic.target - Basic System.
Mar 18 07:04:19.771629 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Mar 18 07:04:19.773897 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 18 07:04:19.776207 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Mar 18 07:04:19.778836 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Mar 18 07:04:19.781514 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 18 07:04:19.784277 systemd[1]: Stopped target sysinit.target - System Initialization.
Mar 18 07:04:19.786859 systemd[1]: Stopped target local-fs.target - Local File Systems.
Mar 18 07:04:19.789602 systemd[1]: Stopped target swap.target - Swaps.
Mar 18 07:04:19.792419 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 18 07:04:19.792692 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Mar 18 07:04:19.795898 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Mar 18 07:04:19.797794 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 18 07:04:19.800420 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Mar 18 07:04:19.800663 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 18 07:04:19.803573 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 18 07:04:19.804013 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Mar 18 07:04:19.807744 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 18 07:04:19.808208 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 18 07:04:19.811218 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 18 07:04:19.811480 systemd[1]: Stopped ignition-files.service - Ignition (files).
Mar 18 07:04:19.821910 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Mar 18 07:04:19.825611 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 18 07:04:19.826106 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 18 07:04:19.835346 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Mar 18 07:04:19.837902 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 18 07:04:19.838242 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 18 07:04:19.842341 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 18 07:04:19.842460 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 18 07:04:19.851691 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 18 07:04:19.851787 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Mar 18 07:04:19.864220 ignition[952]: INFO : Ignition 2.20.0
Mar 18 07:04:19.864220 ignition[952]: INFO : Stage: umount
Mar 18 07:04:19.864220 ignition[952]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 18 07:04:19.864220 ignition[952]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Mar 18 07:04:19.864220 ignition[952]: INFO : umount: umount passed
Mar 18 07:04:19.864220 ignition[952]: INFO : Ignition finished successfully
Mar 18 07:04:19.863285 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 18 07:04:19.863533 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Mar 18 07:04:19.865133 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 18 07:04:19.865178 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Mar 18 07:04:19.866674 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 18 07:04:19.866718 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Mar 18 07:04:19.868450 systemd[1]: ignition-fetch.service: Deactivated successfully.
Mar 18 07:04:19.868492 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Mar 18 07:04:19.874164 systemd[1]: Stopped target network.target - Network.
Mar 18 07:04:19.875131 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 18 07:04:19.875177 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 18 07:04:19.876289 systemd[1]: Stopped target paths.target - Path Units.
Mar 18 07:04:19.877250 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 18 07:04:19.880849 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 18 07:04:19.881381 systemd[1]: Stopped target slices.target - Slice Units.
Mar 18 07:04:19.881852 systemd[1]: Stopped target sockets.target - Socket Units.
Mar 18 07:04:19.882336 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 18 07:04:19.882371 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Mar 18 07:04:19.882898 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 18 07:04:19.882931 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 18 07:04:19.884091 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 18 07:04:19.884133 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Mar 18 07:04:19.885463 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Mar 18 07:04:19.885502 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Mar 18 07:04:19.886671 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Mar 18 07:04:19.888471 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Mar 18 07:04:19.891683 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 18 07:04:19.891880 systemd-networkd[709]: eth0: DHCPv6 lease lost
Mar 18 07:04:19.892373 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 18 07:04:19.892463 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Mar 18 07:04:19.894019 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 18 07:04:19.894109 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Mar 18 07:04:19.895593 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 18 07:04:19.895638 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Mar 18 07:04:19.896771 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 18 07:04:19.896998 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Mar 18 07:04:19.914963 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Mar 18 07:04:19.915548 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 18 07:04:19.916985 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 18 07:04:19.918432 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 18 07:04:19.919287 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 18 07:04:19.919377 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Mar 18 07:04:19.927144 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 18 07:04:19.927804 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 18 07:04:19.930601 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 18 07:04:19.930722 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 18 07:04:19.932356 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 18 07:04:19.932421 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 18 07:04:19.933149 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 18 07:04:19.933180 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 18 07:04:19.934306 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 18 07:04:19.934350 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 18 07:04:19.935981 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 18 07:04:19.936022 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 18 07:04:19.937150 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 18 07:04:19.937194 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 18 07:04:19.947981 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 18 07:04:19.950140 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 18 07:04:19.950192 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 18 07:04:19.951403 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 18 07:04:19.951446 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Mar 18 07:04:19.955038 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 18 07:04:19.955096 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 18 07:04:19.956197 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Mar 18 07:04:19.956239 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 18 07:04:19.958147 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 18 07:04:19.958187 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 18 07:04:19.959729 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 18 07:04:19.959851 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 18 07:04:19.961010 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 18 07:04:19.969008 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 18 07:04:19.978348 systemd[1]: Switching root.
Mar 18 07:04:20.013034 systemd-journald[185]: Journal stopped
Mar 18 07:04:21.705454 systemd-journald[185]: Received SIGTERM from PID 1 (systemd).
Mar 18 07:04:21.705506 kernel: SELinux: policy capability network_peer_controls=1
Mar 18 07:04:21.705525 kernel: SELinux: policy capability open_perms=1
Mar 18 07:04:21.705539 kernel: SELinux: policy capability extended_socket_class=1
Mar 18 07:04:21.705550 kernel: SELinux: policy capability always_check_network=0
Mar 18 07:04:21.705561 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 18 07:04:21.705572 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 18 07:04:21.705583 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 18 07:04:21.705597 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 18 07:04:21.705609 systemd[1]: Successfully loaded SELinux policy in 67.336ms.
Mar 18 07:04:21.705633 kernel: audit: type=1403 audit(1742281460.718:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Mar 18 07:04:21.705645 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.926ms.
Mar 18 07:04:21.705658 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 18 07:04:21.705670 systemd[1]: Detected virtualization kvm.
Mar 18 07:04:21.705682 systemd[1]: Detected architecture x86-64.
Mar 18 07:04:21.705694 systemd[1]: Detected first boot.
Mar 18 07:04:21.705708 systemd[1]: Hostname set to .
Mar 18 07:04:21.705720 systemd[1]: Initializing machine ID from VM UUID.
Mar 18 07:04:21.705732 zram_generator::config[1011]: No configuration found.
Mar 18 07:04:21.705745 systemd[1]: Populated /etc with preset unit settings.
Mar 18 07:04:21.705760 systemd[1]: Queued start job for default target multi-user.target.
Mar 18 07:04:21.705772 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Mar 18 07:04:21.705784 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Mar 18 07:04:21.705796 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Mar 18 07:04:21.705840 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Mar 18 07:04:21.705855 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Mar 18 07:04:21.705871 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Mar 18 07:04:21.705883 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Mar 18 07:04:21.705896 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Mar 18 07:04:21.705908 systemd[1]: Created slice user.slice - User and Session Slice.
Mar 18 07:04:21.705920 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 18 07:04:21.705934 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 18 07:04:21.705946 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Mar 18 07:04:21.705961 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Mar 18 07:04:21.705973 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Mar 18 07:04:21.705985 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 18 07:04:21.705997 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Mar 18 07:04:21.706009 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 18 07:04:21.706020 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Mar 18 07:04:21.706032 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 18 07:04:21.706044 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 18 07:04:21.706059 systemd[1]: Reached target slices.target - Slice Units.
Mar 18 07:04:21.706071 systemd[1]: Reached target swap.target - Swaps.
Mar 18 07:04:21.706083 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Mar 18 07:04:21.706095 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Mar 18 07:04:21.706107 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 18 07:04:21.706119 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Mar 18 07:04:21.706131 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 18 07:04:21.706143 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 18 07:04:21.706157 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 18 07:04:21.706169 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Mar 18 07:04:21.706181 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Mar 18 07:04:21.706194 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Mar 18 07:04:21.706205 systemd[1]: Mounting media.mount - External Media Directory...
Mar 18 07:04:21.706217 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 18 07:04:21.706229 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Mar 18 07:04:21.706241 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Mar 18 07:04:21.706252 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Mar 18 07:04:21.706266 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Mar 18 07:04:21.706278 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 18 07:04:21.706290 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 18 07:04:21.706302 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Mar 18 07:04:21.706314 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 18 07:04:21.706326 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 18 07:04:21.706337 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 18 07:04:21.706350 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Mar 18 07:04:21.706364 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 18 07:04:21.706376 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 18 07:04:21.706388 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Mar 18 07:04:21.706401 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
Mar 18 07:04:21.706413 kernel: loop: module loaded
Mar 18 07:04:21.706424 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 18 07:04:21.706439 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 18 07:04:21.706452 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 18 07:04:21.706464 kernel: ACPI: bus type drm_connector registered
Mar 18 07:04:21.706492 systemd-journald[1120]: Collecting audit messages is disabled.
Mar 18 07:04:21.706518 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Mar 18 07:04:21.706531 systemd-journald[1120]: Journal started
Mar 18 07:04:21.706555 systemd-journald[1120]: Runtime Journal (/run/log/journal/3529632c40924aaeadbd6af6f602ec80) is 8.0M, max 78.3M, 70.3M free.
Mar 18 07:04:21.714516 kernel: fuse: init (API version 7.39)
Mar 18 07:04:21.727659 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 18 07:04:21.731853 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 18 07:04:21.738283 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 18 07:04:21.740912 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Mar 18 07:04:21.743356 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Mar 18 07:04:21.743996 systemd[1]: Mounted media.mount - External Media Directory.
Mar 18 07:04:21.744547 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Mar 18 07:04:21.745255 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Mar 18 07:04:21.745886 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Mar 18 07:04:21.746663 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Mar 18 07:04:21.747575 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 18 07:04:21.748399 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 18 07:04:21.748576 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Mar 18 07:04:21.749389 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 18 07:04:21.749551 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 18 07:04:21.750297 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 18 07:04:21.750450 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 18 07:04:21.751576 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 18 07:04:21.751728 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 18 07:04:21.752500 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Mar 18 07:04:21.752648 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Mar 18 07:04:21.753464 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 18 07:04:21.756227 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 18 07:04:21.757944 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 18 07:04:21.758724 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 18 07:04:21.759579 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Mar 18 07:04:21.769499 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 18 07:04:21.776973 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Mar 18 07:04:21.782917 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Mar 18 07:04:21.783529 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 18 07:04:21.806078 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Mar 18 07:04:21.811968 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Mar 18 07:04:21.812771 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 18 07:04:21.818017 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Mar 18 07:04:21.819931 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 18 07:04:21.822678 systemd-journald[1120]: Time spent on flushing to /var/log/journal/3529632c40924aaeadbd6af6f602ec80 is 32.994ms for 933 entries.
Mar 18 07:04:21.822678 systemd-journald[1120]: System Journal (/var/log/journal/3529632c40924aaeadbd6af6f602ec80) is 8.0M, max 584.8M, 576.8M free.
Mar 18 07:04:21.905292 systemd-journald[1120]: Received client request to flush runtime journal.
Mar 18 07:04:21.826938 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 18 07:04:21.829603 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 18 07:04:21.833534 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 18 07:04:21.835502 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Mar 18 07:04:21.836146 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Mar 18 07:04:21.848574 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Mar 18 07:04:21.858944 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Mar 18 07:04:21.859577 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Mar 18 07:04:21.868546 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 18 07:04:21.880689 udevadm[1173]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Mar 18 07:04:21.907446 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Mar 18 07:04:21.912770 systemd-tmpfiles[1168]: ACLs are not supported, ignoring.
Mar 18 07:04:21.912791 systemd-tmpfiles[1168]: ACLs are not supported, ignoring.
Mar 18 07:04:21.918540 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 18 07:04:21.928015 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Mar 18 07:04:21.956450 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Mar 18 07:04:21.966052 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 18 07:04:21.979462 systemd-tmpfiles[1189]: ACLs are not supported, ignoring.
Mar 18 07:04:21.979484 systemd-tmpfiles[1189]: ACLs are not supported, ignoring.
Mar 18 07:04:21.984083 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 18 07:04:22.505770 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Mar 18 07:04:22.514074 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 18 07:04:22.547225 systemd-udevd[1195]: Using default interface naming scheme 'v255'.
Mar 18 07:04:22.578022 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 18 07:04:22.593157 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 18 07:04:22.660045 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0.
Mar 18 07:04:22.681842 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1197)
Mar 18 07:04:22.685106 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Mar 18 07:04:22.735449 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Mar 18 07:04:22.750839 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Mar 18 07:04:22.766210 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Mar 18 07:04:22.767406 kernel: ACPI: button: Power Button [PWRF]
Mar 18 07:04:22.770906 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Mar 18 07:04:22.770240 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Mar 18 07:04:22.826535 kernel: mousedev: PS/2 mouse device common for all mice
Mar 18 07:04:22.842123 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Mar 18 07:04:22.842205 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Mar 18 07:04:22.850261 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 18 07:04:22.851351 kernel: Console: switching to colour dummy device 80x25
Mar 18 07:04:22.852698 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Mar 18 07:04:22.852828 kernel: [drm] features: -context_init
Mar 18 07:04:22.856918 kernel: [drm] number of scanouts: 1
Mar 18 07:04:22.856964 kernel: [drm] number of cap sets: 0
Mar 18 07:04:22.862834 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0
Mar 18 07:04:22.864207 systemd-networkd[1210]: lo: Link UP
Mar 18 07:04:22.864524 systemd-networkd[1210]: lo: Gained carrier
Mar 18 07:04:22.867528 systemd-networkd[1210]: Enumeration completed
Mar 18 07:04:22.868308 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 18 07:04:22.870161 systemd-networkd[1210]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 18 07:04:22.870486 systemd-networkd[1210]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 18 07:04:22.871471 systemd-networkd[1210]: eth0: Link UP
Mar 18 07:04:22.871545 systemd-networkd[1210]: eth0: Gained carrier
Mar 18 07:04:22.871607 systemd-networkd[1210]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 18 07:04:22.871838 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Mar 18 07:04:22.874983 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Mar 18 07:04:22.880510 kernel: Console: switching to colour frame buffer device 160x50
Mar 18 07:04:22.888972 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Mar 18 07:04:22.890399 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 18 07:04:22.890661 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 18 07:04:22.892386 systemd-networkd[1210]: eth0: DHCPv4 address 172.24.4.138/24, gateway 172.24.4.1 acquired from 172.24.4.1
Mar 18 07:04:22.899429 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 18 07:04:22.905089 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Mar 18 07:04:22.913994 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Mar 18 07:04:22.915898 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 18 07:04:22.916208 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 18 07:04:22.919193 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 18 07:04:22.935864 lvm[1241]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 18 07:04:22.974600 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Mar 18 07:04:22.978634 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 18 07:04:22.993201 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Mar 18 07:04:22.997585 lvm[1247]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 18 07:04:23.022566 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Mar 18 07:04:23.024492 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 18 07:04:23.026235 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar 18 07:04:23.026318 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 18 07:04:23.026504 systemd[1]: Reached target machines.target - Containers.
Mar 18 07:04:23.030354 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Mar 18 07:04:23.039129 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Mar 18 07:04:23.044143 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Mar 18 07:04:23.046708 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 18 07:04:23.049785 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Mar 18 07:04:23.057119 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Mar 18 07:04:23.065111 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Mar 18 07:04:23.069942 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 18 07:04:23.072595 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Mar 18 07:04:23.083607 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Mar 18 07:04:23.109175 kernel: loop0: detected capacity change from 0 to 138184
Mar 18 07:04:23.125060 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Mar 18 07:04:23.125861 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Mar 18 07:04:23.153877 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Mar 18 07:04:23.186857 kernel: loop1: detected capacity change from 0 to 8
Mar 18 07:04:23.205172 kernel: loop2: detected capacity change from 0 to 210664
Mar 18 07:04:23.265881 kernel: loop3: detected capacity change from 0 to 140992
Mar 18 07:04:23.320855 kernel: loop4: detected capacity change from 0 to 138184
Mar 18 07:04:23.379883 kernel: loop5: detected capacity change from 0 to 8
Mar 18 07:04:23.388157 kernel: loop6: detected capacity change from 0 to 210664
Mar 18 07:04:23.429885 kernel: loop7: detected capacity change from 0 to 140992
Mar 18 07:04:23.463773 (sd-merge)[1274]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'.
Mar 18 07:04:23.464774 (sd-merge)[1274]: Merged extensions into '/usr'.
Mar 18 07:04:23.475164 systemd[1]: Reloading requested from client PID 1258 ('systemd-sysext') (unit systemd-sysext.service)...
Mar 18 07:04:23.475198 systemd[1]: Reloading...
Mar 18 07:04:23.569856 zram_generator::config[1302]: No configuration found.
Mar 18 07:04:23.753158 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 18 07:04:23.822581 systemd[1]: Reloading finished in 346 ms.
Mar 18 07:04:23.844674 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Mar 18 07:04:23.856672 systemd[1]: Starting ensure-sysext.service...
Mar 18 07:04:23.871976 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 18 07:04:23.890568 systemd[1]: Reloading requested from client PID 1363 ('systemctl') (unit ensure-sysext.service)...
Mar 18 07:04:23.890586 systemd[1]: Reloading...
Mar 18 07:04:23.910530 systemd-tmpfiles[1364]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Mar 18 07:04:23.911018 systemd-tmpfiles[1364]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Mar 18 07:04:23.912158 systemd-tmpfiles[1364]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Mar 18 07:04:23.912471 systemd-tmpfiles[1364]: ACLs are not supported, ignoring.
Mar 18 07:04:23.912527 systemd-tmpfiles[1364]: ACLs are not supported, ignoring.
Mar 18 07:04:23.916177 systemd-tmpfiles[1364]: Detected autofs mount point /boot during canonicalization of boot.
Mar 18 07:04:23.916190 systemd-tmpfiles[1364]: Skipping /boot
Mar 18 07:04:23.925947 systemd-tmpfiles[1364]: Detected autofs mount point /boot during canonicalization of boot.
Mar 18 07:04:23.925959 systemd-tmpfiles[1364]: Skipping /boot
Mar 18 07:04:23.933747 ldconfig[1255]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Mar 18 07:04:23.968852 zram_generator::config[1390]: No configuration found.
Mar 18 07:04:24.128579 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 18 07:04:24.193781 systemd[1]: Reloading finished in 302 ms.
Mar 18 07:04:24.213156 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Mar 18 07:04:24.221453 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 18 07:04:24.250439 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Mar 18 07:04:24.260989 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Mar 18 07:04:24.274587 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Mar 18 07:04:24.284042 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 18 07:04:24.305226 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Mar 18 07:04:24.313237 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 18 07:04:24.314020 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 18 07:04:24.317920 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 18 07:04:24.329472 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 18 07:04:24.344922 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 18 07:04:24.345637 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 18 07:04:24.345761 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 18 07:04:24.355176 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Mar 18 07:04:24.362891 augenrules[1490]: No rules
Mar 18 07:04:24.363303 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 18 07:04:24.365302 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 18 07:04:24.367697 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 18 07:04:24.373028 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Mar 18 07:04:24.375585 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 18 07:04:24.375777 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 18 07:04:24.384510 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 18 07:04:24.384984 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 18 07:04:24.398326 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Mar 18 07:04:24.409451 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 18 07:04:24.418018 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Mar 18 07:04:24.421553 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 18 07:04:24.430065 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 18 07:04:24.435965 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 18 07:04:24.441850 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 18 07:04:24.451016 systemd-resolved[1470]: Positive Trust Anchors:
Mar 18 07:04:24.451033 systemd-resolved[1470]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 18 07:04:24.451079 systemd-resolved[1470]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 18 07:04:24.455692 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 18 07:04:24.456432 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 18 07:04:24.463547 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Mar 18 07:04:24.465953 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 18 07:04:24.469336 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Mar 18 07:04:24.476434 systemd-resolved[1470]: Using system hostname 'ci-4152-2-2-a-a1f36745dc.novalocal'.
Mar 18 07:04:24.481044 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 18 07:04:24.481225 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 18 07:04:24.487285 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 18 07:04:24.487799 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 18 07:04:24.491239 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 18 07:04:24.494258 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 18 07:04:24.494522 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 18 07:04:24.497718 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 18 07:04:24.499536 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 18 07:04:24.512067 augenrules[1506]: /sbin/augenrules: No change
Mar 18 07:04:24.505508 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Mar 18 07:04:24.511795 systemd[1]: Finished ensure-sysext.service.
Mar 18 07:04:24.519827 augenrules[1538]: No rules
Mar 18 07:04:24.518007 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 18 07:04:24.518250 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Mar 18 07:04:24.525913 systemd[1]: Reached target network.target - Network.
Mar 18 07:04:24.527453 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 18 07:04:24.528047 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 18 07:04:24.528196 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 18 07:04:24.532921 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Mar 18 07:04:24.533652 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 18 07:04:24.542048 systemd-networkd[1210]: eth0: Gained IPv6LL
Mar 18 07:04:24.547088 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Mar 18 07:04:24.549798 systemd[1]: Reached target network-online.target - Network is Online.
Mar 18 07:04:24.602883 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Mar 18 07:04:24.603582 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 18 07:04:24.604156 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Mar 18 07:04:24.604630 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Mar 18 07:04:24.608319 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Mar 18 07:04:24.610095 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Mar 18 07:04:24.610331 systemd[1]: Reached target paths.target - Path Units.
Mar 18 07:04:24.611640 systemd[1]: Reached target time-set.target - System Time Set.
Mar 18 07:04:24.614192 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Mar 18 07:04:24.616835 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Mar 18 07:04:24.619019 systemd[1]: Reached target timers.target - Timer Units.
Mar 18 07:04:24.622426 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Mar 18 07:04:24.627472 systemd[1]: Starting docker.socket - Docker Socket for the API...
Mar 18 07:04:24.635270 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Mar 18 07:04:24.638535 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Mar 18 07:04:24.641185 systemd[1]: Reached target sockets.target - Socket Units.
Mar 18 07:04:24.643642 systemd[1]: Reached target basic.target - Basic System.
Mar 18 07:04:24.645724 systemd[1]: System is tainted: cgroupsv1
Mar 18 07:04:24.645887 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Mar 18 07:04:24.645955 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Mar 18 07:04:24.658969 systemd[1]: Starting containerd.service - containerd container runtime...
Mar 18 07:04:24.668869 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Mar 18 07:04:24.677941 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Mar 18 07:04:24.692023 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Mar 18 07:04:24.698746 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Mar 18 07:04:24.702283 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Mar 18 07:04:24.710833 jq[1562]: false
Mar 18 07:04:24.713093 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 18 07:04:24.724117 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Mar 18 07:04:24.735995 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Mar 18 07:04:24.742987 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Mar 18 07:04:24.751693 dbus-daemon[1559]: [system] SELinux support is enabled
Mar 18 07:04:24.756777 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Mar 18 07:04:24.773027 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Mar 18 07:04:24.780391 extend-filesystems[1563]: Found loop4
Mar 18 07:04:24.780391 extend-filesystems[1563]: Found loop5
Mar 18 07:04:24.780391 extend-filesystems[1563]: Found loop6
Mar 18 07:04:24.780391 extend-filesystems[1563]: Found loop7
Mar 18 07:04:24.780391 extend-filesystems[1563]: Found vda
Mar 18 07:04:24.780391 extend-filesystems[1563]: Found vda1
Mar 18 07:04:24.780391 extend-filesystems[1563]: Found vda2
Mar 18 07:04:24.780391 extend-filesystems[1563]: Found vda3
Mar 18 07:04:24.780391 extend-filesystems[1563]: Found usr
Mar 18 07:04:24.780391 extend-filesystems[1563]: Found vda4
Mar 18 07:04:24.780391 extend-filesystems[1563]: Found vda6
Mar 18 07:04:24.780391 extend-filesystems[1563]: Found vda7
Mar 18 07:04:24.780391 extend-filesystems[1563]: Found vda9
Mar 18 07:04:24.780391 extend-filesystems[1563]: Checking size of /dev/vda9
Mar 18 07:04:24.790457 systemd[1]: Starting systemd-logind.service - User Login Management...
Mar 18 07:04:24.794205 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Mar 18 07:04:24.803148 systemd[1]: Starting update-engine.service - Update Engine...
Mar 18 07:04:24.820020 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Mar 18 07:04:24.821759 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Mar 18 07:04:25.538914 systemd-resolved[1470]: Clock change detected. Flushing caches.
Mar 18 07:04:25.539084 systemd-timesyncd[1550]: Contacted time server 23.168.136.132:123 (0.flatcar.pool.ntp.org).
Mar 18 07:04:25.539131 systemd-timesyncd[1550]: Initial clock synchronization to Tue 2025-03-18 07:04:25.538861 UTC.
Mar 18 07:04:25.548003 extend-filesystems[1563]: Resized partition /dev/vda9
Mar 18 07:04:25.556911 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Mar 18 07:04:25.558006 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Mar 18 07:04:25.561684 extend-filesystems[1597]: resize2fs 1.47.1 (20-May-2024)
Mar 18 07:04:25.561282 systemd[1]: motdgen.service: Deactivated successfully.
Mar 18 07:04:25.561521 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Mar 18 07:04:25.576459 jq[1592]: true
Mar 18 07:04:25.588656 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 2014203 blocks
Mar 18 07:04:25.600665 kernel: EXT4-fs (vda9): resized filesystem to 2014203
Mar 18 07:04:25.589093 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Mar 18 07:04:25.676598 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1209)
Mar 18 07:04:25.676709 update_engine[1589]: I20250318 07:04:25.611176 1589 main.cc:92] Flatcar Update Engine starting
Mar 18 07:04:25.676709 update_engine[1589]: I20250318 07:04:25.619635 1589 update_check_scheduler.cc:74] Next update check in 9m2s
Mar 18 07:04:25.608578 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Mar 18 07:04:25.608843 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Mar 18 07:04:25.634189 (ntainerd)[1606]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Mar 18 07:04:25.704903 jq[1605]: true
Mar 18 07:04:25.705101 extend-filesystems[1597]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Mar 18 07:04:25.705101 extend-filesystems[1597]: old_desc_blocks = 1, new_desc_blocks = 1
Mar 18 07:04:25.705101 extend-filesystems[1597]: The filesystem on /dev/vda9 is now 2014203 (4k) blocks long.
Mar 18 07:04:25.653904 systemd[1]: Started update-engine.service - Update Engine.
Mar 18 07:04:25.715667 extend-filesystems[1563]: Resized filesystem in /dev/vda9
Mar 18 07:04:25.716229 tar[1603]: linux-amd64/helm
Mar 18 07:04:25.668674 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Mar 18 07:04:25.668735 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Mar 18 07:04:25.669506 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Mar 18 07:04:25.669525 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Mar 18 07:04:25.672708 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Mar 18 07:04:25.681600 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Mar 18 07:04:25.696843 systemd[1]: extend-filesystems.service: Deactivated successfully.
Mar 18 07:04:25.697113 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Mar 18 07:04:25.775264 systemd-logind[1585]: New seat seat0.
Mar 18 07:04:25.778284 systemd-logind[1585]: Watching system buttons on /dev/input/event1 (Power Button)
Mar 18 07:04:25.778306 systemd-logind[1585]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Mar 18 07:04:25.778593 systemd[1]: Started systemd-logind.service - User Login Management.
Mar 18 07:04:25.807335 bash[1637]: Updated "/home/core/.ssh/authorized_keys"
Mar 18 07:04:25.808860 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Mar 18 07:04:25.824804 systemd[1]: Starting sshkeys.service...
Mar 18 07:04:25.855119 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Mar 18 07:04:25.868810 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Mar 18 07:04:25.920694 locksmithd[1619]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Mar 18 07:04:26.144780 sshd_keygen[1598]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Mar 18 07:04:26.155984 containerd[1606]: time="2025-03-18T07:04:26.155894806Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Mar 18 07:04:26.208814 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Mar 18 07:04:26.220331 systemd[1]: Starting issuegen.service - Generate /run/issue...
Mar 18 07:04:26.227236 systemd[1]: issuegen.service: Deactivated successfully.
Mar 18 07:04:26.228572 systemd[1]: Finished issuegen.service - Generate /run/issue.
Mar 18 07:04:26.232253 containerd[1606]: time="2025-03-18T07:04:26.232056116Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Mar 18 07:04:26.236006 containerd[1606]: time="2025-03-18T07:04:26.233992167Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.83-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Mar 18 07:04:26.236006 containerd[1606]: time="2025-03-18T07:04:26.234025440Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Mar 18 07:04:26.236006 containerd[1606]: time="2025-03-18T07:04:26.234044686Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Mar 18 07:04:26.236006 containerd[1606]: time="2025-03-18T07:04:26.234221788Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Mar 18 07:04:26.236006 containerd[1606]: time="2025-03-18T07:04:26.234241835Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Mar 18 07:04:26.236006 containerd[1606]: time="2025-03-18T07:04:26.234310935Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Mar 18 07:04:26.236006 containerd[1606]: time="2025-03-18T07:04:26.234326915Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Mar 18 07:04:26.236006 containerd[1606]: time="2025-03-18T07:04:26.234569440Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 18 07:04:26.236006 containerd[1606]: time="2025-03-18T07:04:26.234588756Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Mar 18 07:04:26.236006 containerd[1606]: time="2025-03-18T07:04:26.234604956Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Mar 18 07:04:26.236006 containerd[1606]: time="2025-03-18T07:04:26.234616799Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Mar 18 07:04:26.238835 containerd[1606]: time="2025-03-18T07:04:26.234699384Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Mar 18 07:04:26.238835 containerd[1606]: time="2025-03-18T07:04:26.234913916Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Mar 18 07:04:26.238835 containerd[1606]: time="2025-03-18T07:04:26.235049290Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 18 07:04:26.238835 containerd[1606]: time="2025-03-18T07:04:26.235066171Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Mar 18 07:04:26.238835 containerd[1606]: time="2025-03-18T07:04:26.235155158Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Mar 18 07:04:26.238835 containerd[1606]: time="2025-03-18T07:04:26.235205974Z" level=info msg="metadata content store policy set" policy=shared
Mar 18 07:04:26.240820 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Mar 18 07:04:26.256759 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Mar 18 07:04:26.272140 containerd[1606]: time="2025-03-18T07:04:26.269414742Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Mar 18 07:04:26.272140 containerd[1606]: time="2025-03-18T07:04:26.269530449Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Mar 18 07:04:26.272140 containerd[1606]: time="2025-03-18T07:04:26.269628703Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Mar 18 07:04:26.272140 containerd[1606]: time="2025-03-18T07:04:26.269670111Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Mar 18 07:04:26.272140 containerd[1606]: time="2025-03-18T07:04:26.269688696Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Mar 18 07:04:26.272140 containerd[1606]: time="2025-03-18T07:04:26.269849087Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Mar 18 07:04:26.272140 containerd[1606]: time="2025-03-18T07:04:26.270188834Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Mar 18 07:04:26.272140 containerd[1606]: time="2025-03-18T07:04:26.270288250Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Mar 18 07:04:26.272140 containerd[1606]: time="2025-03-18T07:04:26.270306956Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Mar 18 07:04:26.272140 containerd[1606]: time="2025-03-18T07:04:26.270323196Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Mar 18 07:04:26.272140 containerd[1606]: time="2025-03-18T07:04:26.270339096Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Mar 18 07:04:26.272140 containerd[1606]: time="2025-03-18T07:04:26.270355817Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Mar 18 07:04:26.272140 containerd[1606]: time="2025-03-18T07:04:26.270369833Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Mar 18 07:04:26.272140 containerd[1606]: time="2025-03-18T07:04:26.270385333Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Mar 18 07:04:26.270888 systemd[1]: Started getty@tty1.service - Getty on tty1.
Mar 18 07:04:26.272700 containerd[1606]: time="2025-03-18T07:04:26.270402655Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Mar 18 07:04:26.272700 containerd[1606]: time="2025-03-18T07:04:26.270417703Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Mar 18 07:04:26.276084 containerd[1606]: time="2025-03-18T07:04:26.270432090Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Mar 18 07:04:26.276084 containerd[1606]: time="2025-03-18T07:04:26.276005183Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Mar 18 07:04:26.276084 containerd[1606]: time="2025-03-18T07:04:26.276044256Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Mar 18 07:04:26.276084 containerd[1606]: time="2025-03-18T07:04:26.276062230Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Mar 18 07:04:26.276084 containerd[1606]: time="2025-03-18T07:04:26.276078039Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Mar 18 07:04:26.276228 containerd[1606]: time="2025-03-18T07:04:26.276094530Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Mar 18 07:04:26.276228 containerd[1606]: time="2025-03-18T07:04:26.276128794Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Mar 18 07:04:26.276228 containerd[1606]: time="2025-03-18T07:04:26.276146017Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Mar 18 07:04:26.276228 containerd[1606]: time="2025-03-18T07:04:26.276160514Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Mar 18 07:04:26.276228 containerd[1606]: time="2025-03-18T07:04:26.276178407Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Mar 18 07:04:26.276228 containerd[1606]: time="2025-03-18T07:04:26.276194247Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Mar 18 07:04:26.276228 containerd[1606]: time="2025-03-18T07:04:26.276213313Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Mar 18 07:04:26.276228 containerd[1606]: time="2025-03-18T07:04:26.276227560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Mar 18 07:04:26.276404 containerd[1606]: time="2025-03-18T07:04:26.276243159Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Mar 18 07:04:26.276404 containerd[1606]: time="2025-03-18T07:04:26.276258397Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Mar 18 07:04:26.276404 containerd[1606]: time="2025-03-18T07:04:26.276276141Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Mar 18 07:04:26.276404 containerd[1606]: time="2025-03-18T07:04:26.276304183Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Mar 18 07:04:26.276404 containerd[1606]: time="2025-03-18T07:04:26.276322077Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Mar 18 07:04:26.276404 containerd[1606]: time="2025-03-18T07:04:26.276335392Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Mar 18 07:04:26.276404 containerd[1606]: time="2025-03-18T07:04:26.276385195Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Mar 18 07:04:26.276584 containerd[1606]: time="2025-03-18T07:04:26.276407277Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Mar 18 07:04:26.276584 containerd[1606]: time="2025-03-18T07:04:26.276420692Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Mar 18 07:04:26.276584 containerd[1606]: time="2025-03-18T07:04:26.276542370Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Mar 18 07:04:26.276584 containerd[1606]: time="2025-03-18T07:04:26.276560093Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Mar 18 07:04:26.276584 containerd[1606]: time="2025-03-18T07:04:26.276576234Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Mar 18 07:04:26.276699 containerd[1606]: time="2025-03-18T07:04:26.276590460Z" level=info msg="NRI interface is disabled by configuration."
Mar 18 07:04:26.276699 containerd[1606]: time="2025-03-18T07:04:26.276605388Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Mar 18 07:04:26.277713 containerd[1606]: time="2025-03-18T07:04:26.276956397Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Mar 18 07:04:26.277713 containerd[1606]: time="2025-03-18T07:04:26.277027660Z" level=info msg="Connect containerd service"
Mar 18 07:04:26.277713 containerd[1606]: time="2025-03-18T07:04:26.277068617Z" level=info msg="using legacy CRI server"
Mar 18 07:04:26.277713 containerd[1606]: time="2025-03-18T07:04:26.277077073Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Mar 18 07:04:26.277713 containerd[1606]: time="2025-03-18T07:04:26.277200124Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Mar 18 07:04:26.280652 containerd[1606]: time="2025-03-18T07:04:26.280600050Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 18 07:04:26.284545 containerd[1606]: time="2025-03-18T07:04:26.281090280Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Mar 18 07:04:26.284545 containerd[1606]: time="2025-03-18T07:04:26.281145052Z" level=info msg=serving... address=/run/containerd/containerd.sock
Mar 18 07:04:26.284545 containerd[1606]: time="2025-03-18T07:04:26.281226315Z" level=info msg="Start subscribing containerd event"
Mar 18 07:04:26.284545 containerd[1606]: time="2025-03-18T07:04:26.281267773Z" level=info msg="Start recovering state"
Mar 18 07:04:26.284545 containerd[1606]: time="2025-03-18T07:04:26.281325000Z" level=info msg="Start event monitor"
Mar 18 07:04:26.284545 containerd[1606]: time="2025-03-18T07:04:26.281342182Z" level=info msg="Start snapshots syncer"
Mar 18 07:04:26.284545 containerd[1606]: time="2025-03-18T07:04:26.281353323Z" level=info msg="Start cni network conf syncer for default"
Mar 18 07:04:26.284545 containerd[1606]: time="2025-03-18T07:04:26.281361498Z" level=info msg="Start streaming server"
Mar 18 07:04:26.284545 containerd[1606]: time="2025-03-18T07:04:26.281409268Z" level=info msg="containerd successfully booted in 0.127705s"
Mar 18 07:04:26.284827 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Mar 18 07:04:26.291853 systemd[1]: Reached target getty.target - Login Prompts.
Mar 18 07:04:26.297066 systemd[1]: Started containerd.service - containerd container runtime.
Mar 18 07:04:26.527197 tar[1603]: linux-amd64/LICENSE
Mar 18 07:04:26.527197 tar[1603]: linux-amd64/README.md
Mar 18 07:04:26.540180 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Mar 18 07:04:27.805657 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 18 07:04:27.809926 (kubelet)[1691]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 18 07:04:28.171239 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Mar 18 07:04:28.185236 systemd[1]: Started sshd@0-172.24.4.138:22-172.24.4.1:35044.service - OpenSSH per-connection server daemon (172.24.4.1:35044).
Mar 18 07:04:29.049089 kubelet[1691]: E0318 07:04:29.048962 1691 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 18 07:04:29.052382 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 18 07:04:29.052803 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 18 07:04:29.361053 sshd[1696]: Accepted publickey for core from 172.24.4.1 port 35044 ssh2: RSA SHA256:GaTLl7mOYsQLB+EQCHdFQClTpmSOwApV2HNS8pfyec0 Mar 18 07:04:29.366106 sshd-session[1696]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 18 07:04:29.393279 systemd-logind[1585]: New session 1 of user core. Mar 18 07:04:29.398056 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 18 07:04:29.411180 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 18 07:04:29.453053 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 18 07:04:29.478133 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 18 07:04:29.500040 (systemd)[1708]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 18 07:04:29.651289 systemd[1708]: Queued start job for default target default.target. Mar 18 07:04:29.651683 systemd[1708]: Created slice app.slice - User Application Slice. Mar 18 07:04:29.651704 systemd[1708]: Reached target paths.target - Paths. Mar 18 07:04:29.651720 systemd[1708]: Reached target timers.target - Timers. Mar 18 07:04:29.656665 systemd[1708]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 18 07:04:29.667744 systemd[1708]: Listening on dbus.socket - D-Bus User Message Bus Socket. 
Mar 18 07:04:29.668051 systemd[1708]: Reached target sockets.target - Sockets. Mar 18 07:04:29.668095 systemd[1708]: Reached target basic.target - Basic System. Mar 18 07:04:29.668205 systemd[1708]: Reached target default.target - Main User Target. Mar 18 07:04:29.668270 systemd[1708]: Startup finished in 154ms. Mar 18 07:04:29.668968 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 18 07:04:29.675739 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 18 07:04:30.076590 systemd[1]: Started sshd@1-172.24.4.138:22-172.24.4.1:35060.service - OpenSSH per-connection server daemon (172.24.4.1:35060). Mar 18 07:04:31.327759 login[1673]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Mar 18 07:04:31.343681 systemd-logind[1585]: New session 2 of user core. Mar 18 07:04:31.346618 login[1674]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Mar 18 07:04:31.348394 systemd[1]: Started session-2.scope - Session 2 of User core. Mar 18 07:04:31.367598 systemd-logind[1585]: New session 3 of user core. Mar 18 07:04:31.379147 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 18 07:04:31.631809 sshd[1720]: Accepted publickey for core from 172.24.4.1 port 35060 ssh2: RSA SHA256:GaTLl7mOYsQLB+EQCHdFQClTpmSOwApV2HNS8pfyec0 Mar 18 07:04:31.634863 sshd-session[1720]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 18 07:04:31.645025 systemd-logind[1585]: New session 4 of user core. Mar 18 07:04:31.656256 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 18 07:04:32.262588 sshd[1749]: Connection closed by 172.24.4.1 port 35060 Mar 18 07:04:32.263961 sshd-session[1720]: pam_unix(sshd:session): session closed for user core Mar 18 07:04:32.276971 systemd[1]: Started sshd@2-172.24.4.138:22-172.24.4.1:35068.service - OpenSSH per-connection server daemon (172.24.4.1:35068). 
Mar 18 07:04:32.279424 systemd[1]: sshd@1-172.24.4.138:22-172.24.4.1:35060.service: Deactivated successfully. Mar 18 07:04:32.285300 systemd[1]: session-4.scope: Deactivated successfully. Mar 18 07:04:32.295776 systemd-logind[1585]: Session 4 logged out. Waiting for processes to exit. Mar 18 07:04:32.298920 systemd-logind[1585]: Removed session 4. Mar 18 07:04:32.474241 coreos-metadata[1558]: Mar 18 07:04:32.474 WARN failed to locate config-drive, using the metadata service API instead Mar 18 07:04:32.523416 coreos-metadata[1558]: Mar 18 07:04:32.523 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1 Mar 18 07:04:32.764282 coreos-metadata[1558]: Mar 18 07:04:32.764 INFO Fetch successful Mar 18 07:04:32.764282 coreos-metadata[1558]: Mar 18 07:04:32.764 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Mar 18 07:04:32.777964 coreos-metadata[1558]: Mar 18 07:04:32.777 INFO Fetch successful Mar 18 07:04:32.777964 coreos-metadata[1558]: Mar 18 07:04:32.777 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 Mar 18 07:04:32.792268 coreos-metadata[1558]: Mar 18 07:04:32.792 INFO Fetch successful Mar 18 07:04:32.792268 coreos-metadata[1558]: Mar 18 07:04:32.792 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 Mar 18 07:04:32.807310 coreos-metadata[1558]: Mar 18 07:04:32.807 INFO Fetch successful Mar 18 07:04:32.807310 coreos-metadata[1558]: Mar 18 07:04:32.807 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 Mar 18 07:04:32.821345 coreos-metadata[1558]: Mar 18 07:04:32.821 INFO Fetch successful Mar 18 07:04:32.821345 coreos-metadata[1558]: Mar 18 07:04:32.821 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 Mar 18 07:04:32.836123 coreos-metadata[1558]: Mar 18 07:04:32.836 INFO Fetch successful Mar 18 07:04:32.886964 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. 
Mar 18 07:04:32.892376 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Mar 18 07:04:32.986746 coreos-metadata[1645]: Mar 18 07:04:32.986 WARN failed to locate config-drive, using the metadata service API instead Mar 18 07:04:33.029218 coreos-metadata[1645]: Mar 18 07:04:33.029 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Mar 18 07:04:33.044496 coreos-metadata[1645]: Mar 18 07:04:33.044 INFO Fetch successful Mar 18 07:04:33.044496 coreos-metadata[1645]: Mar 18 07:04:33.044 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Mar 18 07:04:33.060389 coreos-metadata[1645]: Mar 18 07:04:33.060 INFO Fetch successful Mar 18 07:04:33.065785 unknown[1645]: wrote ssh authorized keys file for user: core Mar 18 07:04:33.115682 update-ssh-keys[1770]: Updated "/home/core/.ssh/authorized_keys" Mar 18 07:04:33.116922 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Mar 18 07:04:33.125166 systemd[1]: Finished sshkeys.service. Mar 18 07:04:33.138904 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 18 07:04:33.139564 systemd[1]: Startup finished in 17.129s (kernel) + 11.771s (userspace) = 28.901s. Mar 18 07:04:33.667365 sshd[1751]: Accepted publickey for core from 172.24.4.1 port 35068 ssh2: RSA SHA256:GaTLl7mOYsQLB+EQCHdFQClTpmSOwApV2HNS8pfyec0 Mar 18 07:04:33.670286 sshd-session[1751]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 18 07:04:33.682116 systemd-logind[1585]: New session 5 of user core. Mar 18 07:04:33.694080 systemd[1]: Started session-5.scope - Session 5 of User core. Mar 18 07:04:34.284581 sshd[1777]: Connection closed by 172.24.4.1 port 35068 Mar 18 07:04:34.285114 sshd-session[1751]: pam_unix(sshd:session): session closed for user core Mar 18 07:04:34.292592 systemd-logind[1585]: Session 5 logged out. 
Waiting for processes to exit. Mar 18 07:04:34.294065 systemd[1]: sshd@2-172.24.4.138:22-172.24.4.1:35068.service: Deactivated successfully. Mar 18 07:04:34.300660 systemd[1]: session-5.scope: Deactivated successfully. Mar 18 07:04:34.304665 systemd-logind[1585]: Removed session 5. Mar 18 07:04:39.271761 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 18 07:04:39.283789 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 18 07:04:39.590757 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 18 07:04:39.603928 (kubelet)[1793]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 18 07:04:39.661413 kubelet[1793]: E0318 07:04:39.661324 1793 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 18 07:04:39.668725 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 18 07:04:39.669270 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 18 07:04:44.300926 systemd[1]: Started sshd@3-172.24.4.138:22-172.24.4.1:45008.service - OpenSSH per-connection server daemon (172.24.4.1:45008). Mar 18 07:04:45.787078 sshd[1803]: Accepted publickey for core from 172.24.4.1 port 45008 ssh2: RSA SHA256:GaTLl7mOYsQLB+EQCHdFQClTpmSOwApV2HNS8pfyec0 Mar 18 07:04:45.789773 sshd-session[1803]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 18 07:04:45.801598 systemd-logind[1585]: New session 6 of user core. Mar 18 07:04:45.808955 systemd[1]: Started session-6.scope - Session 6 of User core. 
Mar 18 07:04:46.511488 sshd[1806]: Connection closed by 172.24.4.1 port 45008 Mar 18 07:04:46.511840 sshd-session[1803]: pam_unix(sshd:session): session closed for user core Mar 18 07:04:46.526053 systemd[1]: Started sshd@4-172.24.4.138:22-172.24.4.1:45020.service - OpenSSH per-connection server daemon (172.24.4.1:45020). Mar 18 07:04:46.527412 systemd[1]: sshd@3-172.24.4.138:22-172.24.4.1:45008.service: Deactivated successfully. Mar 18 07:04:46.536179 systemd[1]: session-6.scope: Deactivated successfully. Mar 18 07:04:46.539639 systemd-logind[1585]: Session 6 logged out. Waiting for processes to exit. Mar 18 07:04:46.542382 systemd-logind[1585]: Removed session 6. Mar 18 07:04:47.938825 sshd[1808]: Accepted publickey for core from 172.24.4.1 port 45020 ssh2: RSA SHA256:GaTLl7mOYsQLB+EQCHdFQClTpmSOwApV2HNS8pfyec0 Mar 18 07:04:47.941244 sshd-session[1808]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 18 07:04:47.952847 systemd-logind[1585]: New session 7 of user core. Mar 18 07:04:47.960923 systemd[1]: Started session-7.scope - Session 7 of User core. Mar 18 07:04:48.643492 sshd[1814]: Connection closed by 172.24.4.1 port 45020 Mar 18 07:04:48.642322 sshd-session[1808]: pam_unix(sshd:session): session closed for user core Mar 18 07:04:48.653115 systemd[1]: Started sshd@5-172.24.4.138:22-172.24.4.1:45022.service - OpenSSH per-connection server daemon (172.24.4.1:45022). Mar 18 07:04:48.654178 systemd[1]: sshd@4-172.24.4.138:22-172.24.4.1:45020.service: Deactivated successfully. Mar 18 07:04:48.669046 systemd[1]: session-7.scope: Deactivated successfully. Mar 18 07:04:48.675766 systemd-logind[1585]: Session 7 logged out. Waiting for processes to exit. Mar 18 07:04:48.679032 systemd-logind[1585]: Removed session 7. Mar 18 07:04:49.720830 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Mar 18 07:04:49.739287 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Mar 18 07:04:50.012401 sshd[1816]: Accepted publickey for core from 172.24.4.1 port 45022 ssh2: RSA SHA256:GaTLl7mOYsQLB+EQCHdFQClTpmSOwApV2HNS8pfyec0 Mar 18 07:04:50.014856 sshd-session[1816]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 18 07:04:50.042194 systemd-logind[1585]: New session 8 of user core. Mar 18 07:04:50.045624 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 18 07:04:50.060095 (kubelet)[1832]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 18 07:04:50.061886 systemd[1]: Started session-8.scope - Session 8 of User core. Mar 18 07:04:50.136572 kubelet[1832]: E0318 07:04:50.136483 1832 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 18 07:04:50.141071 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 18 07:04:50.141277 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 18 07:04:50.619488 sshd[1839]: Connection closed by 172.24.4.1 port 45022 Mar 18 07:04:50.619830 sshd-session[1816]: pam_unix(sshd:session): session closed for user core Mar 18 07:04:50.632013 systemd[1]: Started sshd@6-172.24.4.138:22-172.24.4.1:45024.service - OpenSSH per-connection server daemon (172.24.4.1:45024). Mar 18 07:04:50.633093 systemd[1]: sshd@5-172.24.4.138:22-172.24.4.1:45022.service: Deactivated successfully. Mar 18 07:04:50.641901 systemd[1]: session-8.scope: Deactivated successfully. Mar 18 07:04:50.647765 systemd-logind[1585]: Session 8 logged out. Waiting for processes to exit. Mar 18 07:04:50.650745 systemd-logind[1585]: Removed session 8. 
Mar 18 07:04:51.951404 sshd[1845]: Accepted publickey for core from 172.24.4.1 port 45024 ssh2: RSA SHA256:GaTLl7mOYsQLB+EQCHdFQClTpmSOwApV2HNS8pfyec0 Mar 18 07:04:51.954120 sshd-session[1845]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 18 07:04:51.965886 systemd-logind[1585]: New session 9 of user core. Mar 18 07:04:51.974015 systemd[1]: Started session-9.scope - Session 9 of User core. Mar 18 07:04:52.342038 sudo[1852]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Mar 18 07:04:52.342730 sudo[1852]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 18 07:04:52.363207 sudo[1852]: pam_unix(sudo:session): session closed for user root Mar 18 07:04:52.549482 sshd[1851]: Connection closed by 172.24.4.1 port 45024 Mar 18 07:04:52.549860 sshd-session[1845]: pam_unix(sshd:session): session closed for user core Mar 18 07:04:52.563084 systemd[1]: Started sshd@7-172.24.4.138:22-172.24.4.1:45040.service - OpenSSH per-connection server daemon (172.24.4.1:45040). Mar 18 07:04:52.564412 systemd[1]: sshd@6-172.24.4.138:22-172.24.4.1:45024.service: Deactivated successfully. Mar 18 07:04:52.576861 systemd[1]: session-9.scope: Deactivated successfully. Mar 18 07:04:52.580051 systemd-logind[1585]: Session 9 logged out. Waiting for processes to exit. Mar 18 07:04:52.583962 systemd-logind[1585]: Removed session 9. Mar 18 07:04:54.009958 sshd[1854]: Accepted publickey for core from 172.24.4.1 port 45040 ssh2: RSA SHA256:GaTLl7mOYsQLB+EQCHdFQClTpmSOwApV2HNS8pfyec0 Mar 18 07:04:54.012643 sshd-session[1854]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 18 07:04:54.024839 systemd-logind[1585]: New session 10 of user core. Mar 18 07:04:54.028125 systemd[1]: Started session-10.scope - Session 10 of User core. 
Mar 18 07:04:54.375717 sudo[1862]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Mar 18 07:04:54.376346 sudo[1862]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 18 07:04:54.384216 sudo[1862]: pam_unix(sudo:session): session closed for user root Mar 18 07:04:54.394995 sudo[1861]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Mar 18 07:04:54.395678 sudo[1861]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 18 07:04:54.423082 systemd[1]: Starting audit-rules.service - Load Audit Rules... Mar 18 07:04:54.482081 augenrules[1884]: No rules Mar 18 07:04:54.483071 systemd[1]: audit-rules.service: Deactivated successfully. Mar 18 07:04:54.483647 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 18 07:04:54.486403 sudo[1861]: pam_unix(sudo:session): session closed for user root Mar 18 07:04:54.694621 sshd[1860]: Connection closed by 172.24.4.1 port 45040 Mar 18 07:04:54.696776 sshd-session[1854]: pam_unix(sshd:session): session closed for user core Mar 18 07:04:54.706020 systemd[1]: Started sshd@8-172.24.4.138:22-172.24.4.1:53512.service - OpenSSH per-connection server daemon (172.24.4.1:53512). Mar 18 07:04:54.710141 systemd[1]: sshd@7-172.24.4.138:22-172.24.4.1:45040.service: Deactivated successfully. Mar 18 07:04:54.727001 systemd[1]: session-10.scope: Deactivated successfully. Mar 18 07:04:54.731519 systemd-logind[1585]: Session 10 logged out. Waiting for processes to exit. Mar 18 07:04:54.735038 systemd-logind[1585]: Removed session 10. 
Mar 18 07:04:55.982931 sshd[1890]: Accepted publickey for core from 172.24.4.1 port 53512 ssh2: RSA SHA256:GaTLl7mOYsQLB+EQCHdFQClTpmSOwApV2HNS8pfyec0 Mar 18 07:04:55.985580 sshd-session[1890]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 18 07:04:55.995589 systemd-logind[1585]: New session 11 of user core. Mar 18 07:04:56.007054 systemd[1]: Started session-11.scope - Session 11 of User core. Mar 18 07:04:56.402562 sudo[1897]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 18 07:04:56.403181 sudo[1897]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 18 07:04:57.026927 systemd[1]: Starting docker.service - Docker Application Container Engine... Mar 18 07:04:57.027460 (dockerd)[1916]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Mar 18 07:04:57.584528 dockerd[1916]: time="2025-03-18T07:04:57.583987245Z" level=info msg="Starting up" Mar 18 07:04:57.720489 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport594754224-merged.mount: Deactivated successfully. Mar 18 07:04:57.974533 dockerd[1916]: time="2025-03-18T07:04:57.974308011Z" level=info msg="Loading containers: start." Mar 18 07:04:58.199661 kernel: Initializing XFRM netlink socket Mar 18 07:04:58.292490 systemd-networkd[1210]: docker0: Link UP Mar 18 07:04:58.318799 dockerd[1916]: time="2025-03-18T07:04:58.318559720Z" level=info msg="Loading containers: done." 
Mar 18 07:04:58.343651 dockerd[1916]: time="2025-03-18T07:04:58.343191899Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 18 07:04:58.343651 dockerd[1916]: time="2025-03-18T07:04:58.343301699Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1 Mar 18 07:04:58.343651 dockerd[1916]: time="2025-03-18T07:04:58.343411399Z" level=info msg="Daemon has completed initialization" Mar 18 07:04:58.401631 dockerd[1916]: time="2025-03-18T07:04:58.401038425Z" level=info msg="API listen on /run/docker.sock" Mar 18 07:04:58.401803 systemd[1]: Started docker.service - Docker Application Container Engine. Mar 18 07:05:00.079188 containerd[1606]: time="2025-03-18T07:05:00.078234453Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.11\"" Mar 18 07:05:00.271136 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Mar 18 07:05:00.286211 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 18 07:05:00.433596 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 18 07:05:00.442768 (kubelet)[2120]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 18 07:05:00.679542 kubelet[2120]: E0318 07:05:00.679387 2120 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 18 07:05:00.684604 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 18 07:05:00.685026 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 18 07:05:01.123223 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2506939329.mount: Deactivated successfully. Mar 18 07:05:03.702335 containerd[1606]: time="2025-03-18T07:05:03.702262864Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 18 07:05:03.703600 containerd[1606]: time="2025-03-18T07:05:03.703564402Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.11: active requests=0, bytes read=32674581" Mar 18 07:05:03.704664 containerd[1606]: time="2025-03-18T07:05:03.704600956Z" level=info msg="ImageCreate event name:\"sha256:4db5a05c271eac8f5da2f95895ea1ccb9a38f48db3135ba3bdfe35941a396ea8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 18 07:05:03.708032 containerd[1606]: time="2025-03-18T07:05:03.707970123Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:77c54346965036acc7ac95c3200597ede36db9246179248dde21c1a3ecc1caf0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 18 07:05:03.709457 containerd[1606]: time="2025-03-18T07:05:03.709264457Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.11\" with image id 
\"sha256:4db5a05c271eac8f5da2f95895ea1ccb9a38f48db3135ba3bdfe35941a396ea8\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:77c54346965036acc7ac95c3200597ede36db9246179248dde21c1a3ecc1caf0\", size \"32671373\" in 3.630954139s" Mar 18 07:05:03.709457 containerd[1606]: time="2025-03-18T07:05:03.709300395Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.11\" returns image reference \"sha256:4db5a05c271eac8f5da2f95895ea1ccb9a38f48db3135ba3bdfe35941a396ea8\"" Mar 18 07:05:03.733036 containerd[1606]: time="2025-03-18T07:05:03.732787233Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.11\"" Mar 18 07:05:06.351325 containerd[1606]: time="2025-03-18T07:05:06.351270745Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 18 07:05:06.353032 containerd[1606]: time="2025-03-18T07:05:06.352784320Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.11: active requests=0, bytes read=29619780" Mar 18 07:05:06.354125 containerd[1606]: time="2025-03-18T07:05:06.354063870Z" level=info msg="ImageCreate event name:\"sha256:de1025c2d496829d3250130380737609ffcdd10a4dce6f2dcd03f23a85a15e6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 18 07:05:06.357502 containerd[1606]: time="2025-03-18T07:05:06.357430594Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:d8874f3fb45591ecdac67a3035c730808f18b3ab13147495c7d77eb1960d4f6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 18 07:05:06.359049 containerd[1606]: time="2025-03-18T07:05:06.358715935Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.11\" with image id \"sha256:de1025c2d496829d3250130380737609ffcdd10a4dce6f2dcd03f23a85a15e6a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.11\", repo 
digest \"registry.k8s.io/kube-controller-manager@sha256:d8874f3fb45591ecdac67a3035c730808f18b3ab13147495c7d77eb1960d4f6f\", size \"31107380\" in 2.625890881s" Mar 18 07:05:06.359049 containerd[1606]: time="2025-03-18T07:05:06.358759849Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.11\" returns image reference \"sha256:de1025c2d496829d3250130380737609ffcdd10a4dce6f2dcd03f23a85a15e6a\"" Mar 18 07:05:06.382657 containerd[1606]: time="2025-03-18T07:05:06.382608073Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.11\"" Mar 18 07:05:07.920726 containerd[1606]: time="2025-03-18T07:05:07.920667758Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 18 07:05:07.922194 containerd[1606]: time="2025-03-18T07:05:07.921953277Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.11: active requests=0, bytes read=17903317" Mar 18 07:05:07.923733 containerd[1606]: time="2025-03-18T07:05:07.923677019Z" level=info msg="ImageCreate event name:\"sha256:11492f0faf138e933cadd6f533f03e401da9a35e53711e833f18afa6b185b2b7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 18 07:05:07.926924 containerd[1606]: time="2025-03-18T07:05:07.926881961Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:c699f8c97ae7ec819c8bd878d3db104ba72fc440d810d9030e09286b696017b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 18 07:05:07.928158 containerd[1606]: time="2025-03-18T07:05:07.928017987Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.11\" with image id \"sha256:11492f0faf138e933cadd6f533f03e401da9a35e53711e833f18afa6b185b2b7\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:c699f8c97ae7ec819c8bd878d3db104ba72fc440d810d9030e09286b696017b5\", size \"19390935\" in 1.545317359s" Mar 18 07:05:07.928158 
containerd[1606]: time="2025-03-18T07:05:07.928063774Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.11\" returns image reference \"sha256:11492f0faf138e933cadd6f533f03e401da9a35e53711e833f18afa6b185b2b7\""
Mar 18 07:05:07.950317 containerd[1606]: time="2025-03-18T07:05:07.950225423Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.11\""
Mar 18 07:05:09.296049 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount688320153.mount: Deactivated successfully.
Mar 18 07:05:10.013618 containerd[1606]: time="2025-03-18T07:05:10.013503246Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 18 07:05:10.016233 containerd[1606]: time="2025-03-18T07:05:10.015680820Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.11: active requests=0, bytes read=29185380"
Mar 18 07:05:10.018225 containerd[1606]: time="2025-03-18T07:05:10.018091746Z" level=info msg="ImageCreate event name:\"sha256:01045f200a8856c3f5ccfa7be03d72274f1f16fc7a047659e709d603d5c019dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 18 07:05:10.023342 containerd[1606]: time="2025-03-18T07:05:10.023201303Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ea4da798040a18ed3f302e8d5f67307c7275a2a53bcf3d51bcec223acda84a55\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 18 07:05:10.025972 containerd[1606]: time="2025-03-18T07:05:10.025102063Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.11\" with image id \"sha256:01045f200a8856c3f5ccfa7be03d72274f1f16fc7a047659e709d603d5c019dc\", repo tag \"registry.k8s.io/kube-proxy:v1.30.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:ea4da798040a18ed3f302e8d5f67307c7275a2a53bcf3d51bcec223acda84a55\", size \"29184391\" in 2.074815554s"
Mar 18 07:05:10.025972 containerd[1606]: time="2025-03-18T07:05:10.025175442Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.11\" returns image reference \"sha256:01045f200a8856c3f5ccfa7be03d72274f1f16fc7a047659e709d603d5c019dc\""
Mar 18 07:05:10.075693 containerd[1606]: time="2025-03-18T07:05:10.075540727Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Mar 18 07:05:10.712884 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2388705878.mount: Deactivated successfully.
Mar 18 07:05:10.715259 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Mar 18 07:05:10.721610 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 18 07:05:10.995701 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 18 07:05:11.013093 (kubelet)[2239]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 18 07:05:11.159246 kubelet[2239]: E0318 07:05:11.158963 2239 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 18 07:05:11.161759 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 18 07:05:11.162102 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 18 07:05:11.252174 update_engine[1589]: I20250318 07:05:11.251493 1589 update_attempter.cc:509] Updating boot flags...
Mar 18 07:05:11.296488 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2259)
Mar 18 07:05:11.365488 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2261)
Mar 18 07:05:12.251265 containerd[1606]: time="2025-03-18T07:05:12.251216029Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 18 07:05:12.252960 containerd[1606]: time="2025-03-18T07:05:12.252924982Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185769"
Mar 18 07:05:12.253430 containerd[1606]: time="2025-03-18T07:05:12.253382276Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 18 07:05:12.257226 containerd[1606]: time="2025-03-18T07:05:12.257135827Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 18 07:05:12.258530 containerd[1606]: time="2025-03-18T07:05:12.258370943Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 2.182761357s"
Mar 18 07:05:12.258530 containerd[1606]: time="2025-03-18T07:05:12.258403114Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
Mar 18 07:05:12.280902 containerd[1606]: time="2025-03-18T07:05:12.280857189Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Mar 18 07:05:12.859950 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3168833397.mount: Deactivated successfully.
Mar 18 07:05:12.869048 containerd[1606]: time="2025-03-18T07:05:12.868929762Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 18 07:05:12.871237 containerd[1606]: time="2025-03-18T07:05:12.871090299Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322298"
Mar 18 07:05:12.872984 containerd[1606]: time="2025-03-18T07:05:12.872860758Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 18 07:05:12.879231 containerd[1606]: time="2025-03-18T07:05:12.879076195Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 18 07:05:12.882204 containerd[1606]: time="2025-03-18T07:05:12.881429056Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 600.514819ms"
Mar 18 07:05:12.882204 containerd[1606]: time="2025-03-18T07:05:12.881543883Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\""
Mar 18 07:05:12.930634 containerd[1606]: time="2025-03-18T07:05:12.930564874Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\""
Mar 18 07:05:13.706043 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1491527062.mount: Deactivated successfully.
Mar 18 07:05:16.678288 containerd[1606]: time="2025-03-18T07:05:16.678236782Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 18 07:05:16.679526 containerd[1606]: time="2025-03-18T07:05:16.679464891Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238579"
Mar 18 07:05:16.680732 containerd[1606]: time="2025-03-18T07:05:16.680708709Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 18 07:05:16.684317 containerd[1606]: time="2025-03-18T07:05:16.684283970Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 18 07:05:16.686005 containerd[1606]: time="2025-03-18T07:05:16.685979290Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 3.755342541s"
Mar 18 07:05:16.686104 containerd[1606]: time="2025-03-18T07:05:16.686088786Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\""
Mar 18 07:05:21.279373 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Mar 18 07:05:21.289760 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 18 07:05:21.591622 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 18 07:05:21.593574 (kubelet)[2424]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 18 07:05:21.642785 kubelet[2424]: E0318 07:05:21.640971 2424 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 18 07:05:21.645580 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 18 07:05:21.645751 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 18 07:05:21.658727 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 18 07:05:21.665703 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 18 07:05:21.694747 systemd[1]: Reloading requested from client PID 2441 ('systemctl') (unit session-11.scope)...
Mar 18 07:05:21.694764 systemd[1]: Reloading...
Mar 18 07:05:21.777909 zram_generator::config[2479]: No configuration found.
Mar 18 07:05:22.106399 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 18 07:05:22.180960 systemd[1]: Reloading finished in 485 ms.
Mar 18 07:05:22.222844 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Mar 18 07:05:22.222922 systemd[1]: kubelet.service: Failed with result 'signal'.
Mar 18 07:05:22.223294 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 18 07:05:22.231827 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 18 07:05:22.328562 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 18 07:05:22.337732 (kubelet)[2556]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 18 07:05:22.407266 kubelet[2556]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 18 07:05:22.407266 kubelet[2556]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Mar 18 07:05:22.407266 kubelet[2556]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 18 07:05:22.407742 kubelet[2556]: I0318 07:05:22.407235 2556 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 18 07:05:22.811869 kubelet[2556]: I0318 07:05:22.811809 2556 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Mar 18 07:05:22.811869 kubelet[2556]: I0318 07:05:22.811836 2556 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 18 07:05:22.812144 kubelet[2556]: I0318 07:05:22.812076 2556 server.go:927] "Client rotation is on, will bootstrap in background"
Mar 18 07:05:22.831922 kubelet[2556]: I0318 07:05:22.831832 2556 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 18 07:05:22.835481 kubelet[2556]: E0318 07:05:22.835138 2556 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.24.4.138:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.24.4.138:6443: connect: connection refused
Mar 18 07:05:22.854488 kubelet[2556]: I0318 07:05:22.853428 2556 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Mar 18 07:05:22.855196 kubelet[2556]: I0318 07:05:22.855141 2556 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 18 07:05:22.856116 kubelet[2556]: I0318 07:05:22.855347 2556 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4152-2-2-a-a1f36745dc.novalocal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Mar 18 07:05:22.856549 kubelet[2556]: I0318 07:05:22.856427 2556 topology_manager.go:138] "Creating topology manager with none policy"
Mar 18 07:05:22.858879 kubelet[2556]: I0318 07:05:22.856687 2556 container_manager_linux.go:301] "Creating device plugin manager"
Mar 18 07:05:22.859089 kubelet[2556]: I0318 07:05:22.859062 2556 state_mem.go:36] "Initialized new in-memory state store"
Mar 18 07:05:22.860592 kubelet[2556]: I0318 07:05:22.860561 2556 kubelet.go:400] "Attempting to sync node with API server"
Mar 18 07:05:22.860592 kubelet[2556]: I0318 07:05:22.860592 2556 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 18 07:05:22.860731 kubelet[2556]: I0318 07:05:22.860617 2556 kubelet.go:312] "Adding apiserver pod source"
Mar 18 07:05:22.860731 kubelet[2556]: I0318 07:05:22.860637 2556 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 18 07:05:22.866620 kubelet[2556]: W0318 07:05:22.866555 2556 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.24.4.138:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.138:6443: connect: connection refused
Mar 18 07:05:22.866620 kubelet[2556]: E0318 07:05:22.866614 2556 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.24.4.138:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.138:6443: connect: connection refused
Mar 18 07:05:22.866801 kubelet[2556]: W0318 07:05:22.866682 2556 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.24.4.138:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152-2-2-a-a1f36745dc.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.138:6443: connect: connection refused
Mar 18 07:05:22.866801 kubelet[2556]: E0318 07:05:22.866717 2556 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.24.4.138:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152-2-2-a-a1f36745dc.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.138:6443: connect: connection refused
Mar 18 07:05:22.868476 kubelet[2556]: I0318 07:05:22.867149 2556 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Mar 18 07:05:22.869194 kubelet[2556]: I0318 07:05:22.869149 2556 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Mar 18 07:05:22.869278 kubelet[2556]: W0318 07:05:22.869227 2556 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Mar 18 07:05:22.870144 kubelet[2556]: I0318 07:05:22.870111 2556 server.go:1264] "Started kubelet"
Mar 18 07:05:22.880924 kubelet[2556]: I0318 07:05:22.880888 2556 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 18 07:05:22.881738 kubelet[2556]: E0318 07:05:22.881504 2556 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.24.4.138:6443/api/v1/namespaces/default/events\": dial tcp 172.24.4.138:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4152-2-2-a-a1f36745dc.novalocal.182dd3d06565fca5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4152-2-2-a-a1f36745dc.novalocal,UID:ci-4152-2-2-a-a1f36745dc.novalocal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4152-2-2-a-a1f36745dc.novalocal,},FirstTimestamp:2025-03-18 07:05:22.870090917 +0000 UTC m=+0.525354110,LastTimestamp:2025-03-18 07:05:22.870090917 +0000 UTC m=+0.525354110,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4152-2-2-a-a1f36745dc.novalocal,}"
Mar 18 07:05:22.886152 kubelet[2556]: I0318 07:05:22.886078 2556 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Mar 18 07:05:22.888257 kubelet[2556]: I0318 07:05:22.887013 2556 server.go:455] "Adding debug handlers to kubelet server"
Mar 18 07:05:22.888257 kubelet[2556]: I0318 07:05:22.887918 2556 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 18 07:05:22.888257 kubelet[2556]: I0318 07:05:22.888149 2556 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 18 07:05:22.890872 kubelet[2556]: I0318 07:05:22.889502 2556 volume_manager.go:291] "Starting Kubelet Volume Manager"
Mar 18 07:05:22.891371 kubelet[2556]: E0318 07:05:22.891311 2556 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.138:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152-2-2-a-a1f36745dc.novalocal?timeout=10s\": dial tcp 172.24.4.138:6443: connect: connection refused" interval="200ms"
Mar 18 07:05:22.891692 kubelet[2556]: I0318 07:05:22.891666 2556 reconciler.go:26] "Reconciler: start to sync state"
Mar 18 07:05:22.891901 kubelet[2556]: I0318 07:05:22.891874 2556 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Mar 18 07:05:22.892733 kubelet[2556]: W0318 07:05:22.892655 2556 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.24.4.138:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.138:6443: connect: connection refused
Mar 18 07:05:22.892927 kubelet[2556]: E0318 07:05:22.892902 2556 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.24.4.138:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.138:6443: connect: connection refused
Mar 18 07:05:22.893495 kubelet[2556]: I0318 07:05:22.893420 2556 factory.go:221] Registration of the systemd container factory successfully
Mar 18 07:05:22.893818 kubelet[2556]: I0318 07:05:22.893779 2556 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 18 07:05:22.895888 kubelet[2556]: I0318 07:05:22.895858 2556 factory.go:221] Registration of the containerd container factory successfully
Mar 18 07:05:22.904678 kubelet[2556]: I0318 07:05:22.904624 2556 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Mar 18 07:05:22.906377 kubelet[2556]: I0318 07:05:22.905482 2556 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Mar 18 07:05:22.906377 kubelet[2556]: I0318 07:05:22.905508 2556 status_manager.go:217] "Starting to sync pod status with apiserver"
Mar 18 07:05:22.906377 kubelet[2556]: I0318 07:05:22.905531 2556 kubelet.go:2337] "Starting kubelet main sync loop"
Mar 18 07:05:22.906377 kubelet[2556]: E0318 07:05:22.905582 2556 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 18 07:05:22.911339 kubelet[2556]: W0318 07:05:22.911278 2556 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.24.4.138:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.138:6443: connect: connection refused
Mar 18 07:05:22.911339 kubelet[2556]: E0318 07:05:22.911329 2556 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.24.4.138:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.138:6443: connect: connection refused
Mar 18 07:05:22.912769 kubelet[2556]: E0318 07:05:22.912738 2556 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Mar 18 07:05:22.934197 kubelet[2556]: I0318 07:05:22.934166 2556 cpu_manager.go:214] "Starting CPU manager" policy="none"
Mar 18 07:05:22.934334 kubelet[2556]: I0318 07:05:22.934324 2556 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Mar 18 07:05:22.934389 kubelet[2556]: I0318 07:05:22.934382 2556 state_mem.go:36] "Initialized new in-memory state store"
Mar 18 07:05:22.938979 kubelet[2556]: I0318 07:05:22.938967 2556 policy_none.go:49] "None policy: Start"
Mar 18 07:05:22.939506 kubelet[2556]: I0318 07:05:22.939490 2556 memory_manager.go:170] "Starting memorymanager" policy="None"
Mar 18 07:05:22.939638 kubelet[2556]: I0318 07:05:22.939630 2556 state_mem.go:35] "Initializing new in-memory state store"
Mar 18 07:05:22.955455 kubelet[2556]: I0318 07:05:22.954026 2556 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Mar 18 07:05:22.955455 kubelet[2556]: I0318 07:05:22.954210 2556 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 18 07:05:22.955625 kubelet[2556]: I0318 07:05:22.955614 2556 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Mar 18 07:05:22.965873 kubelet[2556]: E0318 07:05:22.965732 2556 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4152-2-2-a-a1f36745dc.novalocal\" not found"
Mar 18 07:05:22.991626 kubelet[2556]: I0318 07:05:22.991609 2556 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152-2-2-a-a1f36745dc.novalocal"
Mar 18 07:05:22.992151 kubelet[2556]: E0318 07:05:22.992130 2556 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.24.4.138:6443/api/v1/nodes\": dial tcp 172.24.4.138:6443: connect: connection refused" node="ci-4152-2-2-a-a1f36745dc.novalocal"
Mar 18 07:05:23.006553 kubelet[2556]: I0318 07:05:23.006530 2556 topology_manager.go:215] "Topology Admit Handler" podUID="02f804660c91a76e4752f8484926a7aa" podNamespace="kube-system" podName="kube-apiserver-ci-4152-2-2-a-a1f36745dc.novalocal"
Mar 18 07:05:23.008452 kubelet[2556]: I0318 07:05:23.008358 2556 topology_manager.go:215] "Topology Admit Handler" podUID="bc2b356c0630ada81ed6cb4eee7bac51" podNamespace="kube-system" podName="kube-controller-manager-ci-4152-2-2-a-a1f36745dc.novalocal"
Mar 18 07:05:23.010027 kubelet[2556]: I0318 07:05:23.010011 2556 topology_manager.go:215] "Topology Admit Handler" podUID="6e1e6e303fdc28286dde2255b9df8c59" podNamespace="kube-system" podName="kube-scheduler-ci-4152-2-2-a-a1f36745dc.novalocal"
Mar 18 07:05:23.093061 kubelet[2556]: E0318 07:05:23.092862 2556 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.138:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152-2-2-a-a1f36745dc.novalocal?timeout=10s\": dial tcp 172.24.4.138:6443: connect: connection refused" interval="400ms"
Mar 18 07:05:23.194509 kubelet[2556]: I0318 07:05:23.194415 2556 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/02f804660c91a76e4752f8484926a7aa-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4152-2-2-a-a1f36745dc.novalocal\" (UID: \"02f804660c91a76e4752f8484926a7aa\") " pod="kube-system/kube-apiserver-ci-4152-2-2-a-a1f36745dc.novalocal"
Mar 18 07:05:23.194965 kubelet[2556]: I0318 07:05:23.194927 2556 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/bc2b356c0630ada81ed6cb4eee7bac51-flexvolume-dir\") pod \"kube-controller-manager-ci-4152-2-2-a-a1f36745dc.novalocal\" (UID: \"bc2b356c0630ada81ed6cb4eee7bac51\") " pod="kube-system/kube-controller-manager-ci-4152-2-2-a-a1f36745dc.novalocal"
Mar 18 07:05:23.195326 kubelet[2556]: I0318 07:05:23.195291 2556 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/bc2b356c0630ada81ed6cb4eee7bac51-k8s-certs\") pod \"kube-controller-manager-ci-4152-2-2-a-a1f36745dc.novalocal\" (UID: \"bc2b356c0630ada81ed6cb4eee7bac51\") " pod="kube-system/kube-controller-manager-ci-4152-2-2-a-a1f36745dc.novalocal"
Mar 18 07:05:23.195627 kubelet[2556]: I0318 07:05:23.195586 2556 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/bc2b356c0630ada81ed6cb4eee7bac51-kubeconfig\") pod \"kube-controller-manager-ci-4152-2-2-a-a1f36745dc.novalocal\" (UID: \"bc2b356c0630ada81ed6cb4eee7bac51\") " pod="kube-system/kube-controller-manager-ci-4152-2-2-a-a1f36745dc.novalocal"
Mar 18 07:05:23.196128 kubelet[2556]: I0318 07:05:23.195777 2556 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6e1e6e303fdc28286dde2255b9df8c59-kubeconfig\") pod \"kube-scheduler-ci-4152-2-2-a-a1f36745dc.novalocal\" (UID: \"6e1e6e303fdc28286dde2255b9df8c59\") " pod="kube-system/kube-scheduler-ci-4152-2-2-a-a1f36745dc.novalocal"
Mar 18 07:05:23.196128 kubelet[2556]: I0318 07:05:23.195833 2556 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/02f804660c91a76e4752f8484926a7aa-ca-certs\") pod \"kube-apiserver-ci-4152-2-2-a-a1f36745dc.novalocal\" (UID: \"02f804660c91a76e4752f8484926a7aa\") " pod="kube-system/kube-apiserver-ci-4152-2-2-a-a1f36745dc.novalocal"
Mar 18 07:05:23.196128 kubelet[2556]: I0318 07:05:23.195874 2556 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/02f804660c91a76e4752f8484926a7aa-k8s-certs\") pod \"kube-apiserver-ci-4152-2-2-a-a1f36745dc.novalocal\" (UID: \"02f804660c91a76e4752f8484926a7aa\") " pod="kube-system/kube-apiserver-ci-4152-2-2-a-a1f36745dc.novalocal"
Mar 18 07:05:23.196128 kubelet[2556]: I0318 07:05:23.195920 2556 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/bc2b356c0630ada81ed6cb4eee7bac51-ca-certs\") pod \"kube-controller-manager-ci-4152-2-2-a-a1f36745dc.novalocal\" (UID: \"bc2b356c0630ada81ed6cb4eee7bac51\") " pod="kube-system/kube-controller-manager-ci-4152-2-2-a-a1f36745dc.novalocal"
Mar 18 07:05:23.196128 kubelet[2556]: I0318 07:05:23.195968 2556 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/bc2b356c0630ada81ed6cb4eee7bac51-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4152-2-2-a-a1f36745dc.novalocal\" (UID: \"bc2b356c0630ada81ed6cb4eee7bac51\") " pod="kube-system/kube-controller-manager-ci-4152-2-2-a-a1f36745dc.novalocal"
Mar 18 07:05:23.197530 kubelet[2556]: I0318 07:05:23.197180 2556 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152-2-2-a-a1f36745dc.novalocal"
Mar 18 07:05:23.198224 kubelet[2556]: E0318 07:05:23.198178 2556 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.24.4.138:6443/api/v1/nodes\": dial tcp 172.24.4.138:6443: connect: connection refused" node="ci-4152-2-2-a-a1f36745dc.novalocal"
Mar 18 07:05:23.316566 containerd[1606]: time="2025-03-18T07:05:23.315284358Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4152-2-2-a-a1f36745dc.novalocal,Uid:02f804660c91a76e4752f8484926a7aa,Namespace:kube-system,Attempt:0,}"
Mar 18 07:05:23.317600 containerd[1606]: time="2025-03-18T07:05:23.317496054Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4152-2-2-a-a1f36745dc.novalocal,Uid:bc2b356c0630ada81ed6cb4eee7bac51,Namespace:kube-system,Attempt:0,}"
Mar 18 07:05:23.321029 containerd[1606]: time="2025-03-18T07:05:23.320972723Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4152-2-2-a-a1f36745dc.novalocal,Uid:6e1e6e303fdc28286dde2255b9df8c59,Namespace:kube-system,Attempt:0,}"
Mar 18 07:05:23.494950 kubelet[2556]: E0318 07:05:23.494771 2556 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.138:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152-2-2-a-a1f36745dc.novalocal?timeout=10s\": dial tcp 172.24.4.138:6443: connect: connection refused" interval="800ms"
Mar 18 07:05:23.602741 kubelet[2556]: I0318 07:05:23.601987 2556 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152-2-2-a-a1f36745dc.novalocal"
Mar 18 07:05:23.602741 kubelet[2556]: E0318 07:05:23.602561 2556 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.24.4.138:6443/api/v1/nodes\": dial tcp 172.24.4.138:6443: connect: connection refused" node="ci-4152-2-2-a-a1f36745dc.novalocal"
Mar 18 07:05:23.773778 kubelet[2556]: W0318 07:05:23.773678 2556 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.24.4.138:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.138:6443: connect: connection refused
Mar 18 07:05:23.774000 kubelet[2556]: E0318 07:05:23.773800 2556 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.24.4.138:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.138:6443: connect: connection refused
Mar 18 07:05:23.877946 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount568135185.mount: Deactivated successfully.
Mar 18 07:05:23.886489 containerd[1606]: time="2025-03-18T07:05:23.885108538Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 18 07:05:23.893181 containerd[1606]: time="2025-03-18T07:05:23.893108846Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064"
Mar 18 07:05:23.894411 containerd[1606]: time="2025-03-18T07:05:23.894325106Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 18 07:05:23.897399 containerd[1606]: time="2025-03-18T07:05:23.897321530Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 18 07:05:23.899568 containerd[1606]: time="2025-03-18T07:05:23.899507286Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 18 07:05:23.900949 containerd[1606]: time="2025-03-18T07:05:23.900850877Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Mar 18 07:05:23.904480 containerd[1606]: time="2025-03-18T07:05:23.902750625Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Mar 18 07:05:23.912240 containerd[1606]: time="2025-03-18T07:05:23.912189212Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 18 07:05:23.916172 containerd[1606]: time="2025-03-18T07:05:23.916081473Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 600.553416ms"
Mar 18 07:05:23.924560 containerd[1606]: time="2025-03-18T07:05:23.924513043Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 603.213135ms"
Mar 18 07:05:23.925197 kubelet[2556]: W0318 07:05:23.925151 2556 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.24.4.138:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152-2-2-a-a1f36745dc.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.138:6443: connect: connection refused
Mar 18 07:05:23.925431 kubelet[2556]: E0318 07:05:23.925386 2556 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.24.4.138:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152-2-2-a-a1f36745dc.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.138:6443: connect: connection refused
Mar 18 07:05:23.925431 kubelet[2556]: W0318 07:05:23.925329 2556 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.24.4.138:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.138:6443: connect: connection refused
Mar 18 07:05:23.925431 kubelet[2556]: E0318 07:05:23.925411 2556 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.24.4.138:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.138:6443: connect: connection refused
Mar 18 07:05:23.930297 containerd[1606]: time="2025-03-18T07:05:23.930239028Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 612.557645ms"
Mar 18 07:05:24.126128 containerd[1606]: time="2025-03-18T07:05:24.125177090Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 18 07:05:24.126128 containerd[1606]: time="2025-03-18T07:05:24.125774985Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 18 07:05:24.126128 containerd[1606]: time="2025-03-18T07:05:24.125792449Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 18 07:05:24.126128 containerd[1606]: time="2025-03-18T07:05:24.125866267Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 18 07:05:24.138009 containerd[1606]: time="2025-03-18T07:05:24.137655756Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 18 07:05:24.138009 containerd[1606]: time="2025-03-18T07:05:24.137716410Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 18 07:05:24.138009 containerd[1606]: time="2025-03-18T07:05:24.137735918Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 18 07:05:24.138009 containerd[1606]: time="2025-03-18T07:05:24.137907040Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 18 07:05:24.141899 containerd[1606]: time="2025-03-18T07:05:24.141667711Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 18 07:05:24.141899 containerd[1606]: time="2025-03-18T07:05:24.141727373Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 18 07:05:24.141899 containerd[1606]: time="2025-03-18T07:05:24.141746098Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 18 07:05:24.141899 containerd[1606]: time="2025-03-18T07:05:24.141825397Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..."
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 18 07:05:24.213941 containerd[1606]: time="2025-03-18T07:05:24.213898647Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4152-2-2-a-a1f36745dc.novalocal,Uid:02f804660c91a76e4752f8484926a7aa,Namespace:kube-system,Attempt:0,} returns sandbox id \"68773064f6591421d7a1c684c0d5b5b297e8eb32f14909a2b427e621e3e7ab08\"" Mar 18 07:05:24.220496 containerd[1606]: time="2025-03-18T07:05:24.219210138Z" level=info msg="CreateContainer within sandbox \"68773064f6591421d7a1c684c0d5b5b297e8eb32f14909a2b427e621e3e7ab08\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 18 07:05:24.248742 containerd[1606]: time="2025-03-18T07:05:24.248250670Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4152-2-2-a-a1f36745dc.novalocal,Uid:bc2b356c0630ada81ed6cb4eee7bac51,Namespace:kube-system,Attempt:0,} returns sandbox id \"dfba4c9f65b64b65b91b53cdd341ace90df29ec2cb0c7c0b513cf9c05c0723bd\"" Mar 18 07:05:24.251532 containerd[1606]: time="2025-03-18T07:05:24.251505479Z" level=info msg="CreateContainer within sandbox \"dfba4c9f65b64b65b91b53cdd341ace90df29ec2cb0c7c0b513cf9c05c0723bd\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 18 07:05:24.255488 containerd[1606]: time="2025-03-18T07:05:24.255458061Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4152-2-2-a-a1f36745dc.novalocal,Uid:6e1e6e303fdc28286dde2255b9df8c59,Namespace:kube-system,Attempt:0,} returns sandbox id \"e27a3925085562b2ebdad14ccaae7981651cb43b4d2d61def2cf252ae6b6fa09\"" Mar 18 07:05:24.256549 containerd[1606]: time="2025-03-18T07:05:24.256480617Z" level=info msg="CreateContainer within sandbox \"68773064f6591421d7a1c684c0d5b5b297e8eb32f14909a2b427e621e3e7ab08\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"066e2ebdc6cb1008594a8263c962af60366147da647ebd21052b63dc0198bef5\"" Mar 18 
07:05:24.257385 containerd[1606]: time="2025-03-18T07:05:24.257350414Z" level=info msg="StartContainer for \"066e2ebdc6cb1008594a8263c962af60366147da647ebd21052b63dc0198bef5\"" Mar 18 07:05:24.259460 containerd[1606]: time="2025-03-18T07:05:24.259402669Z" level=info msg="CreateContainer within sandbox \"e27a3925085562b2ebdad14ccaae7981651cb43b4d2d61def2cf252ae6b6fa09\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 18 07:05:24.279642 containerd[1606]: time="2025-03-18T07:05:24.279566135Z" level=info msg="CreateContainer within sandbox \"dfba4c9f65b64b65b91b53cdd341ace90df29ec2cb0c7c0b513cf9c05c0723bd\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"dc81a5bb2b5daa80a26a5a10a76ca9d590f20f4479fa0e77f9a3a52f4a6c228b\"" Mar 18 07:05:24.280845 containerd[1606]: time="2025-03-18T07:05:24.280822991Z" level=info msg="StartContainer for \"dc81a5bb2b5daa80a26a5a10a76ca9d590f20f4479fa0e77f9a3a52f4a6c228b\"" Mar 18 07:05:24.290376 containerd[1606]: time="2025-03-18T07:05:24.290240594Z" level=info msg="CreateContainer within sandbox \"e27a3925085562b2ebdad14ccaae7981651cb43b4d2d61def2cf252ae6b6fa09\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"a3c0138fe390316071e549d3125d40b0a4429090791f8e0a11a3adc92881a732\"" Mar 18 07:05:24.292635 containerd[1606]: time="2025-03-18T07:05:24.292574519Z" level=info msg="StartContainer for \"a3c0138fe390316071e549d3125d40b0a4429090791f8e0a11a3adc92881a732\"" Mar 18 07:05:24.297161 kubelet[2556]: E0318 07:05:24.297024 2556 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.138:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152-2-2-a-a1f36745dc.novalocal?timeout=10s\": dial tcp 172.24.4.138:6443: connect: connection refused" interval="1.6s" Mar 18 07:05:24.364887 containerd[1606]: time="2025-03-18T07:05:24.364828339Z" level=info msg="StartContainer for 
\"066e2ebdc6cb1008594a8263c962af60366147da647ebd21052b63dc0198bef5\" returns successfully" Mar 18 07:05:24.391795 containerd[1606]: time="2025-03-18T07:05:24.391701089Z" level=info msg="StartContainer for \"a3c0138fe390316071e549d3125d40b0a4429090791f8e0a11a3adc92881a732\" returns successfully" Mar 18 07:05:24.408093 kubelet[2556]: I0318 07:05:24.407890 2556 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152-2-2-a-a1f36745dc.novalocal" Mar 18 07:05:24.410184 kubelet[2556]: E0318 07:05:24.410132 2556 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.24.4.138:6443/api/v1/nodes\": dial tcp 172.24.4.138:6443: connect: connection refused" node="ci-4152-2-2-a-a1f36745dc.novalocal" Mar 18 07:05:24.430122 containerd[1606]: time="2025-03-18T07:05:24.429820465Z" level=info msg="StartContainer for \"dc81a5bb2b5daa80a26a5a10a76ca9d590f20f4479fa0e77f9a3a52f4a6c228b\" returns successfully" Mar 18 07:05:26.013971 kubelet[2556]: I0318 07:05:26.013683 2556 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152-2-2-a-a1f36745dc.novalocal" Mar 18 07:05:26.258152 kubelet[2556]: E0318 07:05:26.258083 2556 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4152-2-2-a-a1f36745dc.novalocal\" not found" node="ci-4152-2-2-a-a1f36745dc.novalocal" Mar 18 07:05:26.317846 kubelet[2556]: I0318 07:05:26.317134 2556 kubelet_node_status.go:76] "Successfully registered node" node="ci-4152-2-2-a-a1f36745dc.novalocal" Mar 18 07:05:26.869203 kubelet[2556]: I0318 07:05:26.869173 2556 apiserver.go:52] "Watching apiserver" Mar 18 07:05:26.894451 kubelet[2556]: I0318 07:05:26.892499 2556 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Mar 18 07:05:28.551565 systemd[1]: Reloading requested from client PID 2828 ('systemctl') (unit session-11.scope)... Mar 18 07:05:28.552196 systemd[1]: Reloading... 
Mar 18 07:05:28.675484 zram_generator::config[2873]: No configuration found. Mar 18 07:05:28.815488 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 18 07:05:28.954168 systemd[1]: Reloading finished in 401 ms. Mar 18 07:05:28.990504 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 18 07:05:28.997278 systemd[1]: kubelet.service: Deactivated successfully. Mar 18 07:05:28.997632 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 18 07:05:29.008911 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 18 07:05:29.233306 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 18 07:05:29.239984 (kubelet)[2941]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 18 07:05:29.295635 kubelet[2941]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 18 07:05:29.295635 kubelet[2941]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Mar 18 07:05:29.295635 kubelet[2941]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Mar 18 07:05:29.296369 kubelet[2941]: I0318 07:05:29.295666 2941 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 18 07:05:29.305542 kubelet[2941]: I0318 07:05:29.304374 2941 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Mar 18 07:05:29.305542 kubelet[2941]: I0318 07:05:29.304419 2941 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 18 07:05:29.305542 kubelet[2941]: I0318 07:05:29.304938 2941 server.go:927] "Client rotation is on, will bootstrap in background" Mar 18 07:05:29.309076 kubelet[2941]: I0318 07:05:29.309053 2941 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Mar 18 07:05:29.311391 kubelet[2941]: I0318 07:05:29.311367 2941 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 18 07:05:29.319722 kubelet[2941]: I0318 07:05:29.319686 2941 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Mar 18 07:05:29.320327 kubelet[2941]: I0318 07:05:29.320303 2941 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 18 07:05:29.320607 kubelet[2941]: I0318 07:05:29.320388 2941 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4152-2-2-a-a1f36745dc.novalocal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Mar 18 07:05:29.320735 kubelet[2941]: I0318 07:05:29.320724 2941 topology_manager.go:138] "Creating topology manager with none 
policy" Mar 18 07:05:29.320793 kubelet[2941]: I0318 07:05:29.320786 2941 container_manager_linux.go:301] "Creating device plugin manager" Mar 18 07:05:29.320876 kubelet[2941]: I0318 07:05:29.320867 2941 state_mem.go:36] "Initialized new in-memory state store" Mar 18 07:05:29.321006 kubelet[2941]: I0318 07:05:29.320997 2941 kubelet.go:400] "Attempting to sync node with API server" Mar 18 07:05:29.321396 kubelet[2941]: I0318 07:05:29.321385 2941 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 18 07:05:29.321492 kubelet[2941]: I0318 07:05:29.321482 2941 kubelet.go:312] "Adding apiserver pod source" Mar 18 07:05:29.321558 kubelet[2941]: I0318 07:05:29.321550 2941 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 18 07:05:29.322928 kubelet[2941]: I0318 07:05:29.322913 2941 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Mar 18 07:05:29.324746 kubelet[2941]: I0318 07:05:29.324729 2941 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 18 07:05:29.346264 kubelet[2941]: I0318 07:05:29.346232 2941 server.go:1264] "Started kubelet" Mar 18 07:05:29.346607 kubelet[2941]: I0318 07:05:29.346556 2941 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Mar 18 07:05:29.346679 kubelet[2941]: I0318 07:05:29.346633 2941 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 18 07:05:29.346909 kubelet[2941]: I0318 07:05:29.346886 2941 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 18 07:05:29.350573 kubelet[2941]: I0318 07:05:29.348834 2941 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 18 07:05:29.351829 kubelet[2941]: I0318 07:05:29.351813 2941 server.go:455] "Adding debug handlers to kubelet server" Mar 18 07:05:29.355258 kubelet[2941]: I0318 
07:05:29.354758 2941 volume_manager.go:291] "Starting Kubelet Volume Manager" Mar 18 07:05:29.358373 kubelet[2941]: I0318 07:05:29.358354 2941 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Mar 18 07:05:29.358646 kubelet[2941]: I0318 07:05:29.358629 2941 reconciler.go:26] "Reconciler: start to sync state" Mar 18 07:05:29.360808 kubelet[2941]: I0318 07:05:29.360784 2941 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 18 07:05:29.362517 kubelet[2941]: I0318 07:05:29.362294 2941 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Mar 18 07:05:29.362517 kubelet[2941]: I0318 07:05:29.362318 2941 status_manager.go:217] "Starting to sync pod status with apiserver" Mar 18 07:05:29.362517 kubelet[2941]: I0318 07:05:29.362332 2941 kubelet.go:2337] "Starting kubelet main sync loop" Mar 18 07:05:29.362517 kubelet[2941]: E0318 07:05:29.362392 2941 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 18 07:05:29.364991 kubelet[2941]: E0318 07:05:29.364862 2941 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 18 07:05:29.366380 kubelet[2941]: I0318 07:05:29.366112 2941 factory.go:221] Registration of the containerd container factory successfully Mar 18 07:05:29.366380 kubelet[2941]: I0318 07:05:29.366128 2941 factory.go:221] Registration of the systemd container factory successfully Mar 18 07:05:29.366380 kubelet[2941]: I0318 07:05:29.366208 2941 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 18 07:05:29.424138 kubelet[2941]: I0318 07:05:29.424116 2941 cpu_manager.go:214] "Starting CPU manager" policy="none" Mar 18 07:05:29.424276 kubelet[2941]: I0318 07:05:29.424265 2941 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Mar 18 07:05:29.424485 kubelet[2941]: I0318 07:05:29.424326 2941 state_mem.go:36] "Initialized new in-memory state store" Mar 18 07:05:29.424572 kubelet[2941]: I0318 07:05:29.424560 2941 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 18 07:05:29.424636 kubelet[2941]: I0318 07:05:29.424616 2941 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 18 07:05:29.424683 kubelet[2941]: I0318 07:05:29.424677 2941 policy_none.go:49] "None policy: Start" Mar 18 07:05:29.425257 kubelet[2941]: I0318 07:05:29.425239 2941 memory_manager.go:170] "Starting memorymanager" policy="None" Mar 18 07:05:29.425318 kubelet[2941]: I0318 07:05:29.425267 2941 state_mem.go:35] "Initializing new in-memory state store" Mar 18 07:05:29.425500 kubelet[2941]: I0318 07:05:29.425431 2941 state_mem.go:75] "Updated machine memory state" Mar 18 07:05:29.426490 kubelet[2941]: I0318 07:05:29.426411 2941 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 18 07:05:29.426609 kubelet[2941]: I0318 07:05:29.426575 2941 container_log_manager.go:186] 
"Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 18 07:05:29.427473 kubelet[2941]: I0318 07:05:29.426659 2941 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 18 07:05:29.458356 kubelet[2941]: I0318 07:05:29.458325 2941 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152-2-2-a-a1f36745dc.novalocal" Mar 18 07:05:29.462795 kubelet[2941]: I0318 07:05:29.462753 2941 topology_manager.go:215] "Topology Admit Handler" podUID="02f804660c91a76e4752f8484926a7aa" podNamespace="kube-system" podName="kube-apiserver-ci-4152-2-2-a-a1f36745dc.novalocal" Mar 18 07:05:29.462896 kubelet[2941]: I0318 07:05:29.462846 2941 topology_manager.go:215] "Topology Admit Handler" podUID="bc2b356c0630ada81ed6cb4eee7bac51" podNamespace="kube-system" podName="kube-controller-manager-ci-4152-2-2-a-a1f36745dc.novalocal" Mar 18 07:05:29.462896 kubelet[2941]: I0318 07:05:29.462878 2941 topology_manager.go:215] "Topology Admit Handler" podUID="6e1e6e303fdc28286dde2255b9df8c59" podNamespace="kube-system" podName="kube-scheduler-ci-4152-2-2-a-a1f36745dc.novalocal" Mar 18 07:05:29.470514 kubelet[2941]: I0318 07:05:29.470481 2941 kubelet_node_status.go:112] "Node was previously registered" node="ci-4152-2-2-a-a1f36745dc.novalocal" Mar 18 07:05:29.470806 kubelet[2941]: I0318 07:05:29.470636 2941 kubelet_node_status.go:76] "Successfully registered node" node="ci-4152-2-2-a-a1f36745dc.novalocal" Mar 18 07:05:29.480582 kubelet[2941]: W0318 07:05:29.480471 2941 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Mar 18 07:05:29.480707 kubelet[2941]: W0318 07:05:29.480642 2941 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Mar 18 07:05:29.482068 kubelet[2941]: W0318 07:05:29.482035 2941 warnings.go:70] metadata.name: this is used in the 
Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Mar 18 07:05:29.512527 sudo[2971]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Mar 18 07:05:29.512838 sudo[2971]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Mar 18 07:05:29.560272 kubelet[2941]: I0318 07:05:29.560226 2941 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/02f804660c91a76e4752f8484926a7aa-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4152-2-2-a-a1f36745dc.novalocal\" (UID: \"02f804660c91a76e4752f8484926a7aa\") " pod="kube-system/kube-apiserver-ci-4152-2-2-a-a1f36745dc.novalocal" Mar 18 07:05:29.560398 kubelet[2941]: I0318 07:05:29.560313 2941 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/bc2b356c0630ada81ed6cb4eee7bac51-ca-certs\") pod \"kube-controller-manager-ci-4152-2-2-a-a1f36745dc.novalocal\" (UID: \"bc2b356c0630ada81ed6cb4eee7bac51\") " pod="kube-system/kube-controller-manager-ci-4152-2-2-a-a1f36745dc.novalocal" Mar 18 07:05:29.560398 kubelet[2941]: I0318 07:05:29.560343 2941 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/bc2b356c0630ada81ed6cb4eee7bac51-flexvolume-dir\") pod \"kube-controller-manager-ci-4152-2-2-a-a1f36745dc.novalocal\" (UID: \"bc2b356c0630ada81ed6cb4eee7bac51\") " pod="kube-system/kube-controller-manager-ci-4152-2-2-a-a1f36745dc.novalocal" Mar 18 07:05:29.560741 kubelet[2941]: I0318 07:05:29.560707 2941 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/bc2b356c0630ada81ed6cb4eee7bac51-kubeconfig\") pod 
\"kube-controller-manager-ci-4152-2-2-a-a1f36745dc.novalocal\" (UID: \"bc2b356c0630ada81ed6cb4eee7bac51\") " pod="kube-system/kube-controller-manager-ci-4152-2-2-a-a1f36745dc.novalocal" Mar 18 07:05:29.560741 kubelet[2941]: I0318 07:05:29.560737 2941 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6e1e6e303fdc28286dde2255b9df8c59-kubeconfig\") pod \"kube-scheduler-ci-4152-2-2-a-a1f36745dc.novalocal\" (UID: \"6e1e6e303fdc28286dde2255b9df8c59\") " pod="kube-system/kube-scheduler-ci-4152-2-2-a-a1f36745dc.novalocal" Mar 18 07:05:29.560804 kubelet[2941]: I0318 07:05:29.560756 2941 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/02f804660c91a76e4752f8484926a7aa-ca-certs\") pod \"kube-apiserver-ci-4152-2-2-a-a1f36745dc.novalocal\" (UID: \"02f804660c91a76e4752f8484926a7aa\") " pod="kube-system/kube-apiserver-ci-4152-2-2-a-a1f36745dc.novalocal" Mar 18 07:05:29.560804 kubelet[2941]: I0318 07:05:29.560775 2941 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/02f804660c91a76e4752f8484926a7aa-k8s-certs\") pod \"kube-apiserver-ci-4152-2-2-a-a1f36745dc.novalocal\" (UID: \"02f804660c91a76e4752f8484926a7aa\") " pod="kube-system/kube-apiserver-ci-4152-2-2-a-a1f36745dc.novalocal" Mar 18 07:05:29.560804 kubelet[2941]: I0318 07:05:29.560793 2941 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/bc2b356c0630ada81ed6cb4eee7bac51-k8s-certs\") pod \"kube-controller-manager-ci-4152-2-2-a-a1f36745dc.novalocal\" (UID: \"bc2b356c0630ada81ed6cb4eee7bac51\") " pod="kube-system/kube-controller-manager-ci-4152-2-2-a-a1f36745dc.novalocal" Mar 18 07:05:29.560878 kubelet[2941]: I0318 07:05:29.560812 2941 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/bc2b356c0630ada81ed6cb4eee7bac51-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4152-2-2-a-a1f36745dc.novalocal\" (UID: \"bc2b356c0630ada81ed6cb4eee7bac51\") " pod="kube-system/kube-controller-manager-ci-4152-2-2-a-a1f36745dc.novalocal" Mar 18 07:05:30.109412 sudo[2971]: pam_unix(sudo:session): session closed for user root Mar 18 07:05:30.325509 kubelet[2941]: I0318 07:05:30.324563 2941 apiserver.go:52] "Watching apiserver" Mar 18 07:05:30.358963 kubelet[2941]: I0318 07:05:30.358908 2941 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Mar 18 07:05:30.404809 kubelet[2941]: W0318 07:05:30.404717 2941 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Mar 18 07:05:30.405396 kubelet[2941]: E0318 07:05:30.404990 2941 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4152-2-2-a-a1f36745dc.novalocal\" already exists" pod="kube-system/kube-apiserver-ci-4152-2-2-a-a1f36745dc.novalocal" Mar 18 07:05:30.438735 kubelet[2941]: I0318 07:05:30.438654 2941 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4152-2-2-a-a1f36745dc.novalocal" podStartSLOduration=1.4386261519999999 podStartE2EDuration="1.438626152s" podCreationTimestamp="2025-03-18 07:05:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-18 07:05:30.427870905 +0000 UTC m=+1.184204824" watchObservedRunningTime="2025-03-18 07:05:30.438626152 +0000 UTC m=+1.194960101" Mar 18 07:05:30.439889 kubelet[2941]: I0318 07:05:30.439091 2941 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/kube-controller-manager-ci-4152-2-2-a-a1f36745dc.novalocal" podStartSLOduration=1.4390773000000001 podStartE2EDuration="1.4390773s" podCreationTimestamp="2025-03-18 07:05:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-18 07:05:30.438318152 +0000 UTC m=+1.194652101" watchObservedRunningTime="2025-03-18 07:05:30.4390773 +0000 UTC m=+1.195411249" Mar 18 07:05:30.473500 kubelet[2941]: I0318 07:05:30.473415 2941 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4152-2-2-a-a1f36745dc.novalocal" podStartSLOduration=1.473388985 podStartE2EDuration="1.473388985s" podCreationTimestamp="2025-03-18 07:05:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-18 07:05:30.458867857 +0000 UTC m=+1.215201806" watchObservedRunningTime="2025-03-18 07:05:30.473388985 +0000 UTC m=+1.229722914" Mar 18 07:05:32.558478 sudo[1897]: pam_unix(sudo:session): session closed for user root Mar 18 07:05:32.834525 sshd[1896]: Connection closed by 172.24.4.1 port 53512 Mar 18 07:05:32.835584 sshd-session[1890]: pam_unix(sshd:session): session closed for user core Mar 18 07:05:32.841056 systemd[1]: sshd@8-172.24.4.138:22-172.24.4.1:53512.service: Deactivated successfully. Mar 18 07:05:32.849684 systemd-logind[1585]: Session 11 logged out. Waiting for processes to exit. Mar 18 07:05:32.851723 systemd[1]: session-11.scope: Deactivated successfully. Mar 18 07:05:32.855848 systemd-logind[1585]: Removed session 11. 
Mar 18 07:05:43.272585 kubelet[2941]: I0318 07:05:43.272483 2941 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Mar 18 07:05:43.273309 containerd[1606]: time="2025-03-18T07:05:43.272911693Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Mar 18 07:05:43.273870 kubelet[2941]: I0318 07:05:43.273690 2941 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Mar 18 07:05:44.011096 kubelet[2941]: I0318 07:05:44.010947 2941 topology_manager.go:215] "Topology Admit Handler" podUID="f880d21d-eb9c-4bbe-b2d9-20ab5344f2aa" podNamespace="kube-system" podName="kube-proxy-nrkpx"
Mar 18 07:05:44.046929 kubelet[2941]: I0318 07:05:44.046859 2941 topology_manager.go:215] "Topology Admit Handler" podUID="a0d6ced7-34d3-49b7-8050-a8ef1b64a013" podNamespace="kube-system" podName="cilium-tthpk"
Mar 18 07:05:44.054476 kubelet[2941]: I0318 07:05:44.052202 2941 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a0d6ced7-34d3-49b7-8050-a8ef1b64a013-cni-path\") pod \"cilium-tthpk\" (UID: \"a0d6ced7-34d3-49b7-8050-a8ef1b64a013\") " pod="kube-system/cilium-tthpk"
Mar 18 07:05:44.055296 kubelet[2941]: I0318 07:05:44.055256 2941 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f880d21d-eb9c-4bbe-b2d9-20ab5344f2aa-lib-modules\") pod \"kube-proxy-nrkpx\" (UID: \"f880d21d-eb9c-4bbe-b2d9-20ab5344f2aa\") " pod="kube-system/kube-proxy-nrkpx"
Mar 18 07:05:44.055769 kubelet[2941]: I0318 07:05:44.055632 2941 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a0d6ced7-34d3-49b7-8050-a8ef1b64a013-xtables-lock\") pod \"cilium-tthpk\" (UID: \"a0d6ced7-34d3-49b7-8050-a8ef1b64a013\") " pod="kube-system/cilium-tthpk"
Mar 18 07:05:44.058467 kubelet[2941]: I0318 07:05:44.056255 2941 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a0d6ced7-34d3-49b7-8050-a8ef1b64a013-cilium-run\") pod \"cilium-tthpk\" (UID: \"a0d6ced7-34d3-49b7-8050-a8ef1b64a013\") " pod="kube-system/cilium-tthpk"
Mar 18 07:05:44.058811 kubelet[2941]: I0318 07:05:44.058632 2941 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a0d6ced7-34d3-49b7-8050-a8ef1b64a013-bpf-maps\") pod \"cilium-tthpk\" (UID: \"a0d6ced7-34d3-49b7-8050-a8ef1b64a013\") " pod="kube-system/cilium-tthpk"
Mar 18 07:05:44.059140 kubelet[2941]: I0318 07:05:44.058978 2941 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a0d6ced7-34d3-49b7-8050-a8ef1b64a013-hostproc\") pod \"cilium-tthpk\" (UID: \"a0d6ced7-34d3-49b7-8050-a8ef1b64a013\") " pod="kube-system/cilium-tthpk"
Mar 18 07:05:44.060194 kubelet[2941]: I0318 07:05:44.060012 2941 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a0d6ced7-34d3-49b7-8050-a8ef1b64a013-etc-cni-netd\") pod \"cilium-tthpk\" (UID: \"a0d6ced7-34d3-49b7-8050-a8ef1b64a013\") " pod="kube-system/cilium-tthpk"
Mar 18 07:05:44.060519 kubelet[2941]: I0318 07:05:44.060143 2941 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a0d6ced7-34d3-49b7-8050-a8ef1b64a013-host-proc-sys-net\") pod \"cilium-tthpk\" (UID: \"a0d6ced7-34d3-49b7-8050-a8ef1b64a013\") " pod="kube-system/cilium-tthpk"
Mar 18 07:05:44.062659 kubelet[2941]: I0318 07:05:44.062332 2941 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a0d6ced7-34d3-49b7-8050-a8ef1b64a013-hubble-tls\") pod \"cilium-tthpk\" (UID: \"a0d6ced7-34d3-49b7-8050-a8ef1b64a013\") " pod="kube-system/cilium-tthpk"
Mar 18 07:05:44.062659 kubelet[2941]: I0318 07:05:44.062529 2941 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a0d6ced7-34d3-49b7-8050-a8ef1b64a013-clustermesh-secrets\") pod \"cilium-tthpk\" (UID: \"a0d6ced7-34d3-49b7-8050-a8ef1b64a013\") " pod="kube-system/cilium-tthpk"
Mar 18 07:05:44.062659 kubelet[2941]: I0318 07:05:44.062600 2941 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f880d21d-eb9c-4bbe-b2d9-20ab5344f2aa-xtables-lock\") pod \"kube-proxy-nrkpx\" (UID: \"f880d21d-eb9c-4bbe-b2d9-20ab5344f2aa\") " pod="kube-system/kube-proxy-nrkpx"
Mar 18 07:05:44.063033 kubelet[2941]: I0318 07:05:44.062635 2941 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a0d6ced7-34d3-49b7-8050-a8ef1b64a013-cilium-cgroup\") pod \"cilium-tthpk\" (UID: \"a0d6ced7-34d3-49b7-8050-a8ef1b64a013\") " pod="kube-system/cilium-tthpk"
Mar 18 07:05:44.063033 kubelet[2941]: I0318 07:05:44.062907 2941 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a0d6ced7-34d3-49b7-8050-a8ef1b64a013-cilium-config-path\") pod \"cilium-tthpk\" (UID: \"a0d6ced7-34d3-49b7-8050-a8ef1b64a013\") " pod="kube-system/cilium-tthpk"
Mar 18 07:05:44.063033 kubelet[2941]: I0318 07:05:44.062977 2941 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a0d6ced7-34d3-49b7-8050-a8ef1b64a013-host-proc-sys-kernel\") pod \"cilium-tthpk\" (UID: \"a0d6ced7-34d3-49b7-8050-a8ef1b64a013\") " pod="kube-system/cilium-tthpk"
Mar 18 07:05:44.063408 kubelet[2941]: I0318 07:05:44.063227 2941 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f880d21d-eb9c-4bbe-b2d9-20ab5344f2aa-kube-proxy\") pod \"kube-proxy-nrkpx\" (UID: \"f880d21d-eb9c-4bbe-b2d9-20ab5344f2aa\") " pod="kube-system/kube-proxy-nrkpx"
Mar 18 07:05:44.063408 kubelet[2941]: I0318 07:05:44.063283 2941 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2694v\" (UniqueName: \"kubernetes.io/projected/f880d21d-eb9c-4bbe-b2d9-20ab5344f2aa-kube-api-access-2694v\") pod \"kube-proxy-nrkpx\" (UID: \"f880d21d-eb9c-4bbe-b2d9-20ab5344f2aa\") " pod="kube-system/kube-proxy-nrkpx"
Mar 18 07:05:44.063408 kubelet[2941]: I0318 07:05:44.063354 2941 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a0d6ced7-34d3-49b7-8050-a8ef1b64a013-lib-modules\") pod \"cilium-tthpk\" (UID: \"a0d6ced7-34d3-49b7-8050-a8ef1b64a013\") " pod="kube-system/cilium-tthpk"
Mar 18 07:05:44.065537 kubelet[2941]: I0318 07:05:44.063663 2941 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9tl8x\" (UniqueName: \"kubernetes.io/projected/a0d6ced7-34d3-49b7-8050-a8ef1b64a013-kube-api-access-9tl8x\") pod \"cilium-tthpk\" (UID: \"a0d6ced7-34d3-49b7-8050-a8ef1b64a013\") " pod="kube-system/cilium-tthpk"
Mar 18 07:05:44.331630 kubelet[2941]: I0318 07:05:44.328969 2941 topology_manager.go:215] "Topology Admit Handler" podUID="bb36754b-5416-4dcd-8180-7b7a9314c8a3" podNamespace="kube-system" podName="cilium-operator-599987898-gr4pn"
Mar 18 07:05:44.343587 containerd[1606]: time="2025-03-18T07:05:44.343159793Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-nrkpx,Uid:f880d21d-eb9c-4bbe-b2d9-20ab5344f2aa,Namespace:kube-system,Attempt:0,}"
Mar 18 07:05:44.368521 kubelet[2941]: I0318 07:05:44.368483 2941 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bb36754b-5416-4dcd-8180-7b7a9314c8a3-cilium-config-path\") pod \"cilium-operator-599987898-gr4pn\" (UID: \"bb36754b-5416-4dcd-8180-7b7a9314c8a3\") " pod="kube-system/cilium-operator-599987898-gr4pn"
Mar 18 07:05:44.368521 kubelet[2941]: I0318 07:05:44.368524 2941 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6n6z\" (UniqueName: \"kubernetes.io/projected/bb36754b-5416-4dcd-8180-7b7a9314c8a3-kube-api-access-t6n6z\") pod \"cilium-operator-599987898-gr4pn\" (UID: \"bb36754b-5416-4dcd-8180-7b7a9314c8a3\") " pod="kube-system/cilium-operator-599987898-gr4pn"
Mar 18 07:05:44.371840 containerd[1606]: time="2025-03-18T07:05:44.371798634Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-tthpk,Uid:a0d6ced7-34d3-49b7-8050-a8ef1b64a013,Namespace:kube-system,Attempt:0,}"
Mar 18 07:05:44.398417 containerd[1606]: time="2025-03-18T07:05:44.398217287Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 18 07:05:44.398725 containerd[1606]: time="2025-03-18T07:05:44.398276318Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 18 07:05:44.399210 containerd[1606]: time="2025-03-18T07:05:44.399003433Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 18 07:05:44.399210 containerd[1606]: time="2025-03-18T07:05:44.399087501Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 18 07:05:44.408829 containerd[1606]: time="2025-03-18T07:05:44.408599029Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 18 07:05:44.408829 containerd[1606]: time="2025-03-18T07:05:44.408659272Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 18 07:05:44.408829 containerd[1606]: time="2025-03-18T07:05:44.408681163Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 18 07:05:44.408829 containerd[1606]: time="2025-03-18T07:05:44.408775300Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 18 07:05:44.456710 containerd[1606]: time="2025-03-18T07:05:44.456599402Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-nrkpx,Uid:f880d21d-eb9c-4bbe-b2d9-20ab5344f2aa,Namespace:kube-system,Attempt:0,} returns sandbox id \"ed2798893cf0104419ab953b48899e03973a528757ff6fc37b9f45f26150cd8b\""
Mar 18 07:05:44.462676 containerd[1606]: time="2025-03-18T07:05:44.462595790Z" level=info msg="CreateContainer within sandbox \"ed2798893cf0104419ab953b48899e03973a528757ff6fc37b9f45f26150cd8b\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Mar 18 07:05:44.463820 containerd[1606]: time="2025-03-18T07:05:44.463789843Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-tthpk,Uid:a0d6ced7-34d3-49b7-8050-a8ef1b64a013,Namespace:kube-system,Attempt:0,} returns sandbox id \"ba9dbfd47e641214b38f0e0de4115a3dfa7f3478985dc1483f2a86adfc490c3c\""
Mar 18 07:05:44.468179 containerd[1606]: time="2025-03-18T07:05:44.468131003Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Mar 18 07:05:44.498727 containerd[1606]: time="2025-03-18T07:05:44.498670583Z" level=info msg="CreateContainer within sandbox \"ed2798893cf0104419ab953b48899e03973a528757ff6fc37b9f45f26150cd8b\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"8aeaa289031fb103cae0d28d4313325270756bbd6baf480b56e5361d2436cf94\""
Mar 18 07:05:44.499381 containerd[1606]: time="2025-03-18T07:05:44.499358154Z" level=info msg="StartContainer for \"8aeaa289031fb103cae0d28d4313325270756bbd6baf480b56e5361d2436cf94\""
Mar 18 07:05:44.564856 containerd[1606]: time="2025-03-18T07:05:44.564820292Z" level=info msg="StartContainer for \"8aeaa289031fb103cae0d28d4313325270756bbd6baf480b56e5361d2436cf94\" returns successfully"
Mar 18 07:05:44.635134 containerd[1606]: time="2025-03-18T07:05:44.634793410Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-gr4pn,Uid:bb36754b-5416-4dcd-8180-7b7a9314c8a3,Namespace:kube-system,Attempt:0,}"
Mar 18 07:05:44.670085 containerd[1606]: time="2025-03-18T07:05:44.669794466Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 18 07:05:44.670085 containerd[1606]: time="2025-03-18T07:05:44.669852185Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 18 07:05:44.670085 containerd[1606]: time="2025-03-18T07:05:44.669881490Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 18 07:05:44.670085 containerd[1606]: time="2025-03-18T07:05:44.669975055Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 18 07:05:44.720712 containerd[1606]: time="2025-03-18T07:05:44.720487224Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-gr4pn,Uid:bb36754b-5416-4dcd-8180-7b7a9314c8a3,Namespace:kube-system,Attempt:0,} returns sandbox id \"c44f1259973831ec1ebcbfbb23928cbb6add5613f5dc114e09eb9b9ff439623c\""
Mar 18 07:05:49.377368 kubelet[2941]: I0318 07:05:49.377297 2941 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-nrkpx" podStartSLOduration=6.377279916 podStartE2EDuration="6.377279916s" podCreationTimestamp="2025-03-18 07:05:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-18 07:05:45.491942264 +0000 UTC m=+16.248276274" watchObservedRunningTime="2025-03-18 07:05:49.377279916 +0000 UTC m=+20.133613845"
Mar 18 07:05:50.256494 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount149542065.mount: Deactivated successfully.
Mar 18 07:05:52.481023 containerd[1606]: time="2025-03-18T07:05:52.480897823Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 18 07:05:52.482326 containerd[1606]: time="2025-03-18T07:05:52.482284837Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503"
Mar 18 07:05:52.483719 containerd[1606]: time="2025-03-18T07:05:52.483573556Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 18 07:05:52.486007 containerd[1606]: time="2025-03-18T07:05:52.485949785Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 8.017771924s"
Mar 18 07:05:52.486077 containerd[1606]: time="2025-03-18T07:05:52.486024215Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Mar 18 07:05:52.488220 containerd[1606]: time="2025-03-18T07:05:52.487881661Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Mar 18 07:05:52.490911 containerd[1606]: time="2025-03-18T07:05:52.490822460Z" level=info msg="CreateContainer within sandbox \"ba9dbfd47e641214b38f0e0de4115a3dfa7f3478985dc1483f2a86adfc490c3c\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Mar 18 07:05:52.514288 containerd[1606]: time="2025-03-18T07:05:52.513312537Z" level=info msg="CreateContainer within sandbox \"ba9dbfd47e641214b38f0e0de4115a3dfa7f3478985dc1483f2a86adfc490c3c\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"c1469f4c03c4aae8b5a5309d5e981328ea7ee80c3745ad3d0fb0cbfe31622e57\""
Mar 18 07:05:52.514288 containerd[1606]: time="2025-03-18T07:05:52.513791046Z" level=info msg="StartContainer for \"c1469f4c03c4aae8b5a5309d5e981328ea7ee80c3745ad3d0fb0cbfe31622e57\""
Mar 18 07:05:52.584229 containerd[1606]: time="2025-03-18T07:05:52.584189694Z" level=info msg="StartContainer for \"c1469f4c03c4aae8b5a5309d5e981328ea7ee80c3745ad3d0fb0cbfe31622e57\" returns successfully"
Mar 18 07:05:53.507380 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c1469f4c03c4aae8b5a5309d5e981328ea7ee80c3745ad3d0fb0cbfe31622e57-rootfs.mount: Deactivated successfully.
Mar 18 07:05:53.736538 containerd[1606]: time="2025-03-18T07:05:53.736417478Z" level=info msg="shim disconnected" id=c1469f4c03c4aae8b5a5309d5e981328ea7ee80c3745ad3d0fb0cbfe31622e57 namespace=k8s.io
Mar 18 07:05:53.737712 containerd[1606]: time="2025-03-18T07:05:53.737289926Z" level=warning msg="cleaning up after shim disconnected" id=c1469f4c03c4aae8b5a5309d5e981328ea7ee80c3745ad3d0fb0cbfe31622e57 namespace=k8s.io
Mar 18 07:05:53.737712 containerd[1606]: time="2025-03-18T07:05:53.737366960Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 18 07:05:54.485612 containerd[1606]: time="2025-03-18T07:05:54.485311132Z" level=info msg="CreateContainer within sandbox \"ba9dbfd47e641214b38f0e0de4115a3dfa7f3478985dc1483f2a86adfc490c3c\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Mar 18 07:05:54.538167 containerd[1606]: time="2025-03-18T07:05:54.538057283Z" level=info msg="CreateContainer within sandbox \"ba9dbfd47e641214b38f0e0de4115a3dfa7f3478985dc1483f2a86adfc490c3c\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"e50e3bf89e606abf8c5fbbdeaa1acbbe2b3ded46140c17cc9b7ae0c0a9e51a2a\""
Mar 18 07:05:54.539546 containerd[1606]: time="2025-03-18T07:05:54.539294345Z" level=info msg="StartContainer for \"e50e3bf89e606abf8c5fbbdeaa1acbbe2b3ded46140c17cc9b7ae0c0a9e51a2a\""
Mar 18 07:05:54.594292 systemd[1]: run-containerd-runc-k8s.io-e50e3bf89e606abf8c5fbbdeaa1acbbe2b3ded46140c17cc9b7ae0c0a9e51a2a-runc.clbTht.mount: Deactivated successfully.
Mar 18 07:05:54.627618 containerd[1606]: time="2025-03-18T07:05:54.627559060Z" level=info msg="StartContainer for \"e50e3bf89e606abf8c5fbbdeaa1acbbe2b3ded46140c17cc9b7ae0c0a9e51a2a\" returns successfully"
Mar 18 07:05:54.639354 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 18 07:05:54.640573 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 18 07:05:54.640631 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Mar 18 07:05:54.647863 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 18 07:05:54.666341 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 18 07:05:54.688066 containerd[1606]: time="2025-03-18T07:05:54.687980035Z" level=info msg="shim disconnected" id=e50e3bf89e606abf8c5fbbdeaa1acbbe2b3ded46140c17cc9b7ae0c0a9e51a2a namespace=k8s.io
Mar 18 07:05:54.688066 containerd[1606]: time="2025-03-18T07:05:54.688055166Z" level=warning msg="cleaning up after shim disconnected" id=e50e3bf89e606abf8c5fbbdeaa1acbbe2b3ded46140c17cc9b7ae0c0a9e51a2a namespace=k8s.io
Mar 18 07:05:54.688066 containerd[1606]: time="2025-03-18T07:05:54.688066737Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 18 07:05:55.483477 containerd[1606]: time="2025-03-18T07:05:55.483388424Z" level=info msg="CreateContainer within sandbox \"ba9dbfd47e641214b38f0e0de4115a3dfa7f3478985dc1483f2a86adfc490c3c\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Mar 18 07:05:55.512817 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e50e3bf89e606abf8c5fbbdeaa1acbbe2b3ded46140c17cc9b7ae0c0a9e51a2a-rootfs.mount: Deactivated successfully.
Mar 18 07:05:55.519569 containerd[1606]: time="2025-03-18T07:05:55.519474319Z" level=info msg="CreateContainer within sandbox \"ba9dbfd47e641214b38f0e0de4115a3dfa7f3478985dc1483f2a86adfc490c3c\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"ae3b7f1e87a99635039150d0cfe140d56f49909b934e9a671f0f141e17aa80f5\""
Mar 18 07:05:55.521781 containerd[1606]: time="2025-03-18T07:05:55.521655021Z" level=info msg="StartContainer for \"ae3b7f1e87a99635039150d0cfe140d56f49909b934e9a671f0f141e17aa80f5\""
Mar 18 07:05:55.630843 containerd[1606]: time="2025-03-18T07:05:55.630731985Z" level=info msg="StartContainer for \"ae3b7f1e87a99635039150d0cfe140d56f49909b934e9a671f0f141e17aa80f5\" returns successfully"
Mar 18 07:05:55.660966 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ae3b7f1e87a99635039150d0cfe140d56f49909b934e9a671f0f141e17aa80f5-rootfs.mount: Deactivated successfully.
Mar 18 07:05:55.737859 containerd[1606]: time="2025-03-18T07:05:55.737479928Z" level=info msg="shim disconnected" id=ae3b7f1e87a99635039150d0cfe140d56f49909b934e9a671f0f141e17aa80f5 namespace=k8s.io
Mar 18 07:05:55.737859 containerd[1606]: time="2025-03-18T07:05:55.737529931Z" level=warning msg="cleaning up after shim disconnected" id=ae3b7f1e87a99635039150d0cfe140d56f49909b934e9a671f0f141e17aa80f5 namespace=k8s.io
Mar 18 07:05:55.737859 containerd[1606]: time="2025-03-18T07:05:55.737539800Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 18 07:05:56.131493 containerd[1606]: time="2025-03-18T07:05:56.131424499Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197"
Mar 18 07:05:56.131623 containerd[1606]: time="2025-03-18T07:05:56.131582455Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 18 07:05:56.135579 containerd[1606]: time="2025-03-18T07:05:56.135492863Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 18 07:05:56.136947 containerd[1606]: time="2025-03-18T07:05:56.136828780Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 3.648913125s"
Mar 18 07:05:56.136947 containerd[1606]: time="2025-03-18T07:05:56.136867793Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Mar 18 07:05:56.139486 containerd[1606]: time="2025-03-18T07:05:56.139328190Z" level=info msg="CreateContainer within sandbox \"c44f1259973831ec1ebcbfbb23928cbb6add5613f5dc114e09eb9b9ff439623c\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Mar 18 07:05:56.156084 containerd[1606]: time="2025-03-18T07:05:56.156044107Z" level=info msg="CreateContainer within sandbox \"c44f1259973831ec1ebcbfbb23928cbb6add5613f5dc114e09eb9b9ff439623c\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"d6d35eab7ebce92c0f0abda276209ac70356c9b33f43df741c0c5433c7789167\""
Mar 18 07:05:56.157322 containerd[1606]: time="2025-03-18T07:05:56.157302408Z" level=info msg="StartContainer for \"d6d35eab7ebce92c0f0abda276209ac70356c9b33f43df741c0c5433c7789167\""
Mar 18 07:05:56.209213 containerd[1606]: time="2025-03-18T07:05:56.209034620Z" level=info msg="StartContainer for \"d6d35eab7ebce92c0f0abda276209ac70356c9b33f43df741c0c5433c7789167\" returns successfully"
Mar 18 07:05:56.494501 containerd[1606]: time="2025-03-18T07:05:56.494010701Z" level=info msg="CreateContainer within sandbox \"ba9dbfd47e641214b38f0e0de4115a3dfa7f3478985dc1483f2a86adfc490c3c\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Mar 18 07:05:56.532991 containerd[1606]: time="2025-03-18T07:05:56.532452275Z" level=info msg="CreateContainer within sandbox \"ba9dbfd47e641214b38f0e0de4115a3dfa7f3478985dc1483f2a86adfc490c3c\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"469db4638b30ba596b33aed4db9b1ec342b0cb66f92a7a4b0fe91d7ce8772088\""
Mar 18 07:05:56.533621 containerd[1606]: time="2025-03-18T07:05:56.533588416Z" level=info msg="StartContainer for \"469db4638b30ba596b33aed4db9b1ec342b0cb66f92a7a4b0fe91d7ce8772088\""
Mar 18 07:05:56.537206 kubelet[2941]: I0318 07:05:56.535357 2941 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-gr4pn" podStartSLOduration=1.119552368 podStartE2EDuration="12.535342228s" podCreationTimestamp="2025-03-18 07:05:44 +0000 UTC" firstStartedPulling="2025-03-18 07:05:44.721827009 +0000 UTC m=+15.478160928" lastFinishedPulling="2025-03-18 07:05:56.137616859 +0000 UTC m=+26.893950788" observedRunningTime="2025-03-18 07:05:56.501024793 +0000 UTC m=+27.257358743" watchObservedRunningTime="2025-03-18 07:05:56.535342228 +0000 UTC m=+27.291676147"
Mar 18 07:05:56.644286 containerd[1606]: time="2025-03-18T07:05:56.644243264Z" level=info msg="StartContainer for \"469db4638b30ba596b33aed4db9b1ec342b0cb66f92a7a4b0fe91d7ce8772088\" returns successfully"
Mar 18 07:05:56.889592 containerd[1606]: time="2025-03-18T07:05:56.889362536Z" level=info msg="shim disconnected" id=469db4638b30ba596b33aed4db9b1ec342b0cb66f92a7a4b0fe91d7ce8772088 namespace=k8s.io
Mar 18 07:05:56.892595 containerd[1606]: time="2025-03-18T07:05:56.889551682Z" level=warning msg="cleaning up after shim disconnected" id=469db4638b30ba596b33aed4db9b1ec342b0cb66f92a7a4b0fe91d7ce8772088 namespace=k8s.io
Mar 18 07:05:56.892595 containerd[1606]: time="2025-03-18T07:05:56.890119607Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 18 07:05:57.517871 containerd[1606]: time="2025-03-18T07:05:57.515251662Z" level=info msg="CreateContainer within sandbox \"ba9dbfd47e641214b38f0e0de4115a3dfa7f3478985dc1483f2a86adfc490c3c\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Mar 18 07:05:57.515987 systemd[1]: run-containerd-runc-k8s.io-469db4638b30ba596b33aed4db9b1ec342b0cb66f92a7a4b0fe91d7ce8772088-runc.ih4Pky.mount: Deactivated successfully.
Mar 18 07:05:57.516360 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-469db4638b30ba596b33aed4db9b1ec342b0cb66f92a7a4b0fe91d7ce8772088-rootfs.mount: Deactivated successfully.
Mar 18 07:05:57.568510 containerd[1606]: time="2025-03-18T07:05:57.565800762Z" level=info msg="CreateContainer within sandbox \"ba9dbfd47e641214b38f0e0de4115a3dfa7f3478985dc1483f2a86adfc490c3c\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"98626ebdb36e4ac890cbfe7db4c5637df251dfeffcec40ba61e746316f0a1327\""
Mar 18 07:05:57.574450 containerd[1606]: time="2025-03-18T07:05:57.573383739Z" level=info msg="StartContainer for \"98626ebdb36e4ac890cbfe7db4c5637df251dfeffcec40ba61e746316f0a1327\""
Mar 18 07:05:57.678560 containerd[1606]: time="2025-03-18T07:05:57.678522491Z" level=info msg="StartContainer for \"98626ebdb36e4ac890cbfe7db4c5637df251dfeffcec40ba61e746316f0a1327\" returns successfully"
Mar 18 07:05:57.798235 kubelet[2941]: I0318 07:05:57.795791 2941 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Mar 18 07:05:57.823371 kubelet[2941]: I0318 07:05:57.823314 2941 topology_manager.go:215] "Topology Admit Handler" podUID="b0f56f8d-aa80-48c8-9cde-f0724f7ebf2d" podNamespace="kube-system" podName="coredns-7db6d8ff4d-k25w6"
Mar 18 07:05:57.828126 kubelet[2941]: I0318 07:05:57.828092 2941 topology_manager.go:215] "Topology Admit Handler" podUID="e9355e30-d408-4943-a6a7-3330ce993274" podNamespace="kube-system" podName="coredns-7db6d8ff4d-hlmdl"
Mar 18 07:05:57.867600 kubelet[2941]: I0318 07:05:57.867564 2941 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9k28\" (UniqueName: \"kubernetes.io/projected/b0f56f8d-aa80-48c8-9cde-f0724f7ebf2d-kube-api-access-v9k28\") pod \"coredns-7db6d8ff4d-k25w6\" (UID: \"b0f56f8d-aa80-48c8-9cde-f0724f7ebf2d\") " pod="kube-system/coredns-7db6d8ff4d-k25w6"
Mar 18 07:05:57.867691 kubelet[2941]: I0318 07:05:57.867611 2941 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b0f56f8d-aa80-48c8-9cde-f0724f7ebf2d-config-volume\") pod \"coredns-7db6d8ff4d-k25w6\" (UID: \"b0f56f8d-aa80-48c8-9cde-f0724f7ebf2d\") " pod="kube-system/coredns-7db6d8ff4d-k25w6"
Mar 18 07:05:57.867691 kubelet[2941]: I0318 07:05:57.867671 2941 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e9355e30-d408-4943-a6a7-3330ce993274-config-volume\") pod \"coredns-7db6d8ff4d-hlmdl\" (UID: \"e9355e30-d408-4943-a6a7-3330ce993274\") " pod="kube-system/coredns-7db6d8ff4d-hlmdl"
Mar 18 07:05:57.867746 kubelet[2941]: I0318 07:05:57.867693 2941 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5rmw\" (UniqueName: \"kubernetes.io/projected/e9355e30-d408-4943-a6a7-3330ce993274-kube-api-access-g5rmw\") pod \"coredns-7db6d8ff4d-hlmdl\" (UID: \"e9355e30-d408-4943-a6a7-3330ce993274\") " pod="kube-system/coredns-7db6d8ff4d-hlmdl"
Mar 18 07:05:58.134512 containerd[1606]: time="2025-03-18T07:05:58.132615256Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-hlmdl,Uid:e9355e30-d408-4943-a6a7-3330ce993274,Namespace:kube-system,Attempt:0,}"
Mar 18 07:05:58.141662 containerd[1606]: time="2025-03-18T07:05:58.141625121Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-k25w6,Uid:b0f56f8d-aa80-48c8-9cde-f0724f7ebf2d,Namespace:kube-system,Attempt:0,}"
Mar 18 07:05:58.539364 kubelet[2941]: I0318 07:05:58.537748 2941 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-tthpk" podStartSLOduration=7.516487906 podStartE2EDuration="15.537710598s" podCreationTimestamp="2025-03-18 07:05:43 +0000 UTC" firstStartedPulling="2025-03-18 07:05:44.466357974 +0000 UTC m=+15.222691893" lastFinishedPulling="2025-03-18 07:05:52.487580666 +0000 UTC m=+23.243914585" observedRunningTime="2025-03-18 07:05:58.537611512 +0000 UTC m=+29.293945461" watchObservedRunningTime="2025-03-18 07:05:58.537710598 +0000 UTC m=+29.294044567"
Mar 18 07:05:59.789640 systemd-networkd[1210]: cilium_host: Link UP
Mar 18 07:05:59.791764 systemd-networkd[1210]: cilium_net: Link UP
Mar 18 07:05:59.792506 systemd-networkd[1210]: cilium_net: Gained carrier
Mar 18 07:05:59.792901 systemd-networkd[1210]: cilium_host: Gained carrier
Mar 18 07:05:59.890692 systemd-networkd[1210]: cilium_vxlan: Link UP
Mar 18 07:05:59.890699 systemd-networkd[1210]: cilium_vxlan: Gained carrier
Mar 18 07:06:00.140287 systemd-networkd[1210]: cilium_host: Gained IPv6LL
Mar 18 07:06:00.180491 kernel: NET: Registered PF_ALG protocol family
Mar 18 07:06:00.746594 systemd-networkd[1210]: cilium_net: Gained IPv6LL
Mar 18 07:06:00.902716 systemd-networkd[1210]: lxc_health: Link UP
Mar 18 07:06:00.903307 systemd-networkd[1210]: lxc_health: Gained carrier
Mar 18 07:06:01.202033 systemd-networkd[1210]: lxc363b3f8250fe: Link UP
Mar 18 07:06:01.207490 kernel: eth0: renamed from tmpc86b9
Mar 18 07:06:01.219198 systemd-networkd[1210]: lxc363b3f8250fe: Gained carrier
Mar 18 07:06:01.228633 systemd-networkd[1210]: lxc8f3074580e0f: Link UP
Mar 18 07:06:01.245229 kernel: eth0: renamed from tmp57b8d
Mar 18 07:06:01.249572 systemd-networkd[1210]: lxc8f3074580e0f: Gained carrier
Mar 18 07:06:01.578693 systemd-networkd[1210]: cilium_vxlan: Gained IPv6LL
Mar 18 07:06:02.602727 systemd-networkd[1210]: lxc_health: Gained IPv6LL
Mar 18 07:06:02.606565 systemd-networkd[1210]: lxc363b3f8250fe: Gained IPv6LL
Mar 18 07:06:03.178836 systemd-networkd[1210]: lxc8f3074580e0f: Gained IPv6LL
Mar 18 07:06:05.659469 containerd[1606]: time="2025-03-18T07:06:05.657506894Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 18 07:06:05.659469 containerd[1606]: time="2025-03-18T07:06:05.658732714Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 18 07:06:05.659469 containerd[1606]: time="2025-03-18T07:06:05.658858789Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 18 07:06:05.662429 containerd[1606]: time="2025-03-18T07:06:05.659487909Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 18 07:06:05.712641 containerd[1606]: time="2025-03-18T07:06:05.712348295Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 18 07:06:05.713209 containerd[1606]: time="2025-03-18T07:06:05.712667629Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 18 07:06:05.713209 containerd[1606]: time="2025-03-18T07:06:05.712842043Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 18 07:06:05.713779 containerd[1606]: time="2025-03-18T07:06:05.713479979Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 18 07:06:05.777812 containerd[1606]: time="2025-03-18T07:06:05.777770123Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-hlmdl,Uid:e9355e30-d408-4943-a6a7-3330ce993274,Namespace:kube-system,Attempt:0,} returns sandbox id \"c86b9a2fa949c965d37ce35f14aa78b55784d709103a75362d020b5acccd6264\""
Mar 18 07:06:05.786473 containerd[1606]: time="2025-03-18T07:06:05.785912478Z" level=info msg="CreateContainer within sandbox \"c86b9a2fa949c965d37ce35f14aa78b55784d709103a75362d020b5acccd6264\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Mar 18 07:06:05.830082 containerd[1606]: time="2025-03-18T07:06:05.829465152Z" level=info msg="CreateContainer within sandbox \"c86b9a2fa949c965d37ce35f14aa78b55784d709103a75362d020b5acccd6264\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"dcfe07a9dc80861f43107e6ee59c8095c8157a863d151ec1ad1edf988abf4545\""
Mar 18 07:06:05.830351 containerd[1606]: time="2025-03-18T07:06:05.830325732Z" level=info msg="StartContainer for \"dcfe07a9dc80861f43107e6ee59c8095c8157a863d151ec1ad1edf988abf4545\""
Mar 18 07:06:05.845849 containerd[1606]: time="2025-03-18T07:06:05.845807117Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-k25w6,Uid:b0f56f8d-aa80-48c8-9cde-f0724f7ebf2d,Namespace:kube-system,Attempt:0,} returns sandbox id \"57b8dfd91b8e08128c5da6ede12e7a92352221f6eb44f658ffd713d9663b63ce\""
Mar 18 07:06:05.850088 containerd[1606]: time="2025-03-18T07:06:05.850061890Z" level=info msg="CreateContainer within sandbox \"57b8dfd91b8e08128c5da6ede12e7a92352221f6eb44f658ffd713d9663b63ce\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Mar 18 07:06:05.870461 containerd[1606]: time="2025-03-18T07:06:05.870136066Z" level=info msg="CreateContainer within sandbox \"57b8dfd91b8e08128c5da6ede12e7a92352221f6eb44f658ffd713d9663b63ce\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c7eda3502ee25d07ee3332829cc611d5f6616370e33fda75241888ae526058ed\""
Mar 18 07:06:05.871047 containerd[1606]: time="2025-03-18T07:06:05.870764004Z" level=info msg="StartContainer for \"c7eda3502ee25d07ee3332829cc611d5f6616370e33fda75241888ae526058ed\""
Mar 18 07:06:05.917456 containerd[1606]: time="2025-03-18T07:06:05.916011640Z" level=info msg="StartContainer for \"dcfe07a9dc80861f43107e6ee59c8095c8157a863d151ec1ad1edf988abf4545\" returns successfully"
Mar 18 07:06:05.945080 containerd[1606]: time="2025-03-18T07:06:05.945036524Z" level=info msg="StartContainer for \"c7eda3502ee25d07ee3332829cc611d5f6616370e33fda75241888ae526058ed\" returns successfully"
Mar 18 07:06:06.560193 kubelet[2941]: I0318 07:06:06.558183 2941 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-hlmdl" podStartSLOduration=22.558151674 podStartE2EDuration="22.558151674s" podCreationTimestamp="2025-03-18 07:05:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-18 07:06:06.5548595 +0000 UTC m=+37.311193479" watchObservedRunningTime="2025-03-18 07:06:06.558151674 +0000 UTC m=+37.314485653"
Mar 18 07:06:13.643515 kubelet[2941]: I0318 07:06:13.642769 2941 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 18 07:06:13.682312 kubelet[2941]: I0318 07:06:13.681956 2941 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-k25w6" podStartSLOduration=29.68192609 podStartE2EDuration="29.68192609s" podCreationTimestamp="2025-03-18 07:05:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-18 07:06:06.617633511 +0000 UTC m=+37.373967470" watchObservedRunningTime="2025-03-18 07:06:13.68192609 +0000 UTC m=+44.438260059"
Mar 18 07:06:58.779363 systemd[1]: Started sshd@9-172.24.4.138:22-172.24.4.1:34472.service - OpenSSH per-connection server daemon (172.24.4.1:34472).
Mar 18 07:06:59.991108 sshd[4304]: Accepted publickey for core from 172.24.4.1 port 34472 ssh2: RSA SHA256:GaTLl7mOYsQLB+EQCHdFQClTpmSOwApV2HNS8pfyec0
Mar 18 07:06:59.993827 sshd-session[4304]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 18 07:07:00.005051 systemd-logind[1585]: New session 12 of user core.
Mar 18 07:07:00.016125 systemd[1]: Started session-12.scope - Session 12 of User core.
Mar 18 07:07:00.798640 sshd[4307]: Connection closed by 172.24.4.1 port 34472
Mar 18 07:07:00.799397 sshd-session[4304]: pam_unix(sshd:session): session closed for user core
Mar 18 07:07:00.807790 systemd[1]: sshd@9-172.24.4.138:22-172.24.4.1:34472.service: Deactivated successfully.
Mar 18 07:07:00.814645 systemd[1]: session-12.scope: Deactivated successfully.
Mar 18 07:07:00.816775 systemd-logind[1585]: Session 12 logged out. Waiting for processes to exit.
Mar 18 07:07:00.819732 systemd-logind[1585]: Removed session 12.
Mar 18 07:07:05.815005 systemd[1]: Started sshd@10-172.24.4.138:22-172.24.4.1:47114.service - OpenSSH per-connection server daemon (172.24.4.1:47114).
Mar 18 07:07:07.108379 sshd[4319]: Accepted publickey for core from 172.24.4.1 port 47114 ssh2: RSA SHA256:GaTLl7mOYsQLB+EQCHdFQClTpmSOwApV2HNS8pfyec0
Mar 18 07:07:07.110891 sshd-session[4319]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 18 07:07:07.122051 systemd-logind[1585]: New session 13 of user core.
Mar 18 07:07:07.128940 systemd[1]: Started session-13.scope - Session 13 of User core.
Mar 18 07:07:08.021534 sshd[4322]: Connection closed by 172.24.4.1 port 47114
Mar 18 07:07:08.022636 sshd-session[4319]: pam_unix(sshd:session): session closed for user core
Mar 18 07:07:08.028761 systemd[1]: sshd@10-172.24.4.138:22-172.24.4.1:47114.service: Deactivated successfully.
Mar 18 07:07:08.037655 systemd[1]: session-13.scope: Deactivated successfully. Mar 18 07:07:08.040568 systemd-logind[1585]: Session 13 logged out. Waiting for processes to exit. Mar 18 07:07:08.043827 systemd-logind[1585]: Removed session 13. Mar 18 07:07:13.035953 systemd[1]: Started sshd@11-172.24.4.138:22-172.24.4.1:47116.service - OpenSSH per-connection server daemon (172.24.4.1:47116). Mar 18 07:07:14.221727 sshd[4334]: Accepted publickey for core from 172.24.4.1 port 47116 ssh2: RSA SHA256:GaTLl7mOYsQLB+EQCHdFQClTpmSOwApV2HNS8pfyec0 Mar 18 07:07:14.224421 sshd-session[4334]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 18 07:07:14.233533 systemd-logind[1585]: New session 14 of user core. Mar 18 07:07:14.240155 systemd[1]: Started session-14.scope - Session 14 of User core. Mar 18 07:07:15.162967 sshd[4337]: Connection closed by 172.24.4.1 port 47116 Mar 18 07:07:15.164143 sshd-session[4334]: pam_unix(sshd:session): session closed for user core Mar 18 07:07:15.170151 systemd[1]: sshd@11-172.24.4.138:22-172.24.4.1:47116.service: Deactivated successfully. Mar 18 07:07:15.178971 systemd-logind[1585]: Session 14 logged out. Waiting for processes to exit. Mar 18 07:07:15.179994 systemd[1]: session-14.scope: Deactivated successfully. Mar 18 07:07:15.182735 systemd-logind[1585]: Removed session 14. Mar 18 07:07:20.178121 systemd[1]: Started sshd@12-172.24.4.138:22-172.24.4.1:58612.service - OpenSSH per-connection server daemon (172.24.4.1:58612). Mar 18 07:07:21.455641 sshd[4351]: Accepted publickey for core from 172.24.4.1 port 58612 ssh2: RSA SHA256:GaTLl7mOYsQLB+EQCHdFQClTpmSOwApV2HNS8pfyec0 Mar 18 07:07:21.458258 sshd-session[4351]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 18 07:07:21.469577 systemd-logind[1585]: New session 15 of user core. Mar 18 07:07:21.478269 systemd[1]: Started session-15.scope - Session 15 of User core. 
Mar 18 07:07:22.187498 sshd[4354]: Connection closed by 172.24.4.1 port 58612 Mar 18 07:07:22.189806 sshd-session[4351]: pam_unix(sshd:session): session closed for user core Mar 18 07:07:22.198548 systemd[1]: Started sshd@13-172.24.4.138:22-172.24.4.1:58620.service - OpenSSH per-connection server daemon (172.24.4.1:58620). Mar 18 07:07:22.200804 systemd[1]: sshd@12-172.24.4.138:22-172.24.4.1:58612.service: Deactivated successfully. Mar 18 07:07:22.210982 systemd[1]: session-15.scope: Deactivated successfully. Mar 18 07:07:22.214722 systemd-logind[1585]: Session 15 logged out. Waiting for processes to exit. Mar 18 07:07:22.219914 systemd-logind[1585]: Removed session 15. Mar 18 07:07:23.384159 sshd[4362]: Accepted publickey for core from 172.24.4.1 port 58620 ssh2: RSA SHA256:GaTLl7mOYsQLB+EQCHdFQClTpmSOwApV2HNS8pfyec0 Mar 18 07:07:23.387751 sshd-session[4362]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 18 07:07:23.397423 systemd-logind[1585]: New session 16 of user core. Mar 18 07:07:23.414100 systemd[1]: Started session-16.scope - Session 16 of User core. Mar 18 07:07:24.155767 sshd[4368]: Connection closed by 172.24.4.1 port 58620 Mar 18 07:07:24.156366 sshd-session[4362]: pam_unix(sshd:session): session closed for user core Mar 18 07:07:24.174348 systemd[1]: Started sshd@14-172.24.4.138:22-172.24.4.1:34904.service - OpenSSH per-connection server daemon (172.24.4.1:34904). Mar 18 07:07:24.177098 systemd[1]: sshd@13-172.24.4.138:22-172.24.4.1:58620.service: Deactivated successfully. Mar 18 07:07:24.190624 systemd[1]: session-16.scope: Deactivated successfully. Mar 18 07:07:24.200380 systemd-logind[1585]: Session 16 logged out. Waiting for processes to exit. Mar 18 07:07:24.203089 systemd-logind[1585]: Removed session 16. 
Mar 18 07:07:25.316359 sshd[4373]: Accepted publickey for core from 172.24.4.1 port 34904 ssh2: RSA SHA256:GaTLl7mOYsQLB+EQCHdFQClTpmSOwApV2HNS8pfyec0 Mar 18 07:07:25.319125 sshd-session[4373]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 18 07:07:25.331854 systemd-logind[1585]: New session 17 of user core. Mar 18 07:07:25.336755 systemd[1]: Started session-17.scope - Session 17 of User core. Mar 18 07:07:26.046892 sshd[4379]: Connection closed by 172.24.4.1 port 34904 Mar 18 07:07:26.047396 sshd-session[4373]: pam_unix(sshd:session): session closed for user core Mar 18 07:07:26.052432 systemd[1]: sshd@14-172.24.4.138:22-172.24.4.1:34904.service: Deactivated successfully. Mar 18 07:07:26.056760 systemd[1]: session-17.scope: Deactivated successfully. Mar 18 07:07:26.058133 systemd-logind[1585]: Session 17 logged out. Waiting for processes to exit. Mar 18 07:07:26.060371 systemd-logind[1585]: Removed session 17. Mar 18 07:07:31.058950 systemd[1]: Started sshd@15-172.24.4.138:22-172.24.4.1:34916.service - OpenSSH per-connection server daemon (172.24.4.1:34916). Mar 18 07:07:32.267158 sshd[4392]: Accepted publickey for core from 172.24.4.1 port 34916 ssh2: RSA SHA256:GaTLl7mOYsQLB+EQCHdFQClTpmSOwApV2HNS8pfyec0 Mar 18 07:07:32.269922 sshd-session[4392]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 18 07:07:32.281198 systemd-logind[1585]: New session 18 of user core. Mar 18 07:07:32.291264 systemd[1]: Started session-18.scope - Session 18 of User core. Mar 18 07:07:33.130497 sshd[4395]: Connection closed by 172.24.4.1 port 34916 Mar 18 07:07:33.130783 sshd-session[4392]: pam_unix(sshd:session): session closed for user core Mar 18 07:07:33.142295 systemd[1]: Started sshd@16-172.24.4.138:22-172.24.4.1:34924.service - OpenSSH per-connection server daemon (172.24.4.1:34924). Mar 18 07:07:33.143368 systemd[1]: sshd@15-172.24.4.138:22-172.24.4.1:34916.service: Deactivated successfully. 
Mar 18 07:07:33.158945 systemd[1]: session-18.scope: Deactivated successfully. Mar 18 07:07:33.164747 systemd-logind[1585]: Session 18 logged out. Waiting for processes to exit. Mar 18 07:07:33.168512 systemd-logind[1585]: Removed session 18. Mar 18 07:07:34.330123 sshd[4403]: Accepted publickey for core from 172.24.4.1 port 34924 ssh2: RSA SHA256:GaTLl7mOYsQLB+EQCHdFQClTpmSOwApV2HNS8pfyec0 Mar 18 07:07:34.332791 sshd-session[4403]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 18 07:07:34.344561 systemd-logind[1585]: New session 19 of user core. Mar 18 07:07:34.352960 systemd[1]: Started session-19.scope - Session 19 of User core. Mar 18 07:07:35.198017 sshd[4409]: Connection closed by 172.24.4.1 port 34924 Mar 18 07:07:35.199831 sshd-session[4403]: pam_unix(sshd:session): session closed for user core Mar 18 07:07:35.210128 systemd[1]: Started sshd@17-172.24.4.138:22-172.24.4.1:37762.service - OpenSSH per-connection server daemon (172.24.4.1:37762). Mar 18 07:07:35.213145 systemd[1]: sshd@16-172.24.4.138:22-172.24.4.1:34924.service: Deactivated successfully. Mar 18 07:07:35.222603 systemd[1]: session-19.scope: Deactivated successfully. Mar 18 07:07:35.224897 systemd-logind[1585]: Session 19 logged out. Waiting for processes to exit. Mar 18 07:07:35.229411 systemd-logind[1585]: Removed session 19. Mar 18 07:07:36.358949 sshd[4415]: Accepted publickey for core from 172.24.4.1 port 37762 ssh2: RSA SHA256:GaTLl7mOYsQLB+EQCHdFQClTpmSOwApV2HNS8pfyec0 Mar 18 07:07:36.361715 sshd-session[4415]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 18 07:07:36.372400 systemd-logind[1585]: New session 20 of user core. Mar 18 07:07:36.383010 systemd[1]: Started session-20.scope - Session 20 of User core. 
Mar 18 07:07:39.126069 sshd[4421]: Connection closed by 172.24.4.1 port 37762 Mar 18 07:07:39.127662 sshd-session[4415]: pam_unix(sshd:session): session closed for user core Mar 18 07:07:39.143027 systemd[1]: Started sshd@18-172.24.4.138:22-172.24.4.1:37766.service - OpenSSH per-connection server daemon (172.24.4.1:37766). Mar 18 07:07:39.144209 systemd[1]: sshd@17-172.24.4.138:22-172.24.4.1:37762.service: Deactivated successfully. Mar 18 07:07:39.149888 systemd[1]: session-20.scope: Deactivated successfully. Mar 18 07:07:39.153548 systemd-logind[1585]: Session 20 logged out. Waiting for processes to exit. Mar 18 07:07:39.157023 systemd-logind[1585]: Removed session 20. Mar 18 07:07:40.334773 sshd[4436]: Accepted publickey for core from 172.24.4.1 port 37766 ssh2: RSA SHA256:GaTLl7mOYsQLB+EQCHdFQClTpmSOwApV2HNS8pfyec0 Mar 18 07:07:40.337694 sshd-session[4436]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 18 07:07:40.347594 systemd-logind[1585]: New session 21 of user core. Mar 18 07:07:40.353958 systemd[1]: Started session-21.scope - Session 21 of User core. Mar 18 07:07:41.395484 sshd[4441]: Connection closed by 172.24.4.1 port 37766 Mar 18 07:07:41.396613 sshd-session[4436]: pam_unix(sshd:session): session closed for user core Mar 18 07:07:41.411375 systemd[1]: Started sshd@19-172.24.4.138:22-172.24.4.1:37782.service - OpenSSH per-connection server daemon (172.24.4.1:37782). Mar 18 07:07:41.416242 systemd[1]: sshd@18-172.24.4.138:22-172.24.4.1:37766.service: Deactivated successfully. Mar 18 07:07:41.425812 systemd[1]: session-21.scope: Deactivated successfully. Mar 18 07:07:41.432078 systemd-logind[1585]: Session 21 logged out. Waiting for processes to exit. Mar 18 07:07:41.436735 systemd-logind[1585]: Removed session 21. 
Mar 18 07:07:42.563335 sshd[4447]: Accepted publickey for core from 172.24.4.1 port 37782 ssh2: RSA SHA256:GaTLl7mOYsQLB+EQCHdFQClTpmSOwApV2HNS8pfyec0 Mar 18 07:07:42.566149 sshd-session[4447]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 18 07:07:42.576842 systemd-logind[1585]: New session 22 of user core. Mar 18 07:07:42.585082 systemd[1]: Started session-22.scope - Session 22 of User core. Mar 18 07:07:43.295849 sshd[4453]: Connection closed by 172.24.4.1 port 37782 Mar 18 07:07:43.296911 sshd-session[4447]: pam_unix(sshd:session): session closed for user core Mar 18 07:07:43.302661 systemd[1]: sshd@19-172.24.4.138:22-172.24.4.1:37782.service: Deactivated successfully. Mar 18 07:07:43.310414 systemd[1]: session-22.scope: Deactivated successfully. Mar 18 07:07:43.313897 systemd-logind[1585]: Session 22 logged out. Waiting for processes to exit. Mar 18 07:07:43.316482 systemd-logind[1585]: Removed session 22. Mar 18 07:07:48.312960 systemd[1]: Started sshd@20-172.24.4.138:22-172.24.4.1:58138.service - OpenSSH per-connection server daemon (172.24.4.1:58138). Mar 18 07:07:49.507970 sshd[4468]: Accepted publickey for core from 172.24.4.1 port 58138 ssh2: RSA SHA256:GaTLl7mOYsQLB+EQCHdFQClTpmSOwApV2HNS8pfyec0 Mar 18 07:07:49.510839 sshd-session[4468]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 18 07:07:49.522148 systemd-logind[1585]: New session 23 of user core. Mar 18 07:07:49.532168 systemd[1]: Started session-23.scope - Session 23 of User core. Mar 18 07:07:50.277901 sshd[4471]: Connection closed by 172.24.4.1 port 58138 Mar 18 07:07:50.278948 sshd-session[4468]: pam_unix(sshd:session): session closed for user core Mar 18 07:07:50.284626 systemd[1]: sshd@20-172.24.4.138:22-172.24.4.1:58138.service: Deactivated successfully. Mar 18 07:07:50.293845 systemd[1]: session-23.scope: Deactivated successfully. Mar 18 07:07:50.299600 systemd-logind[1585]: Session 23 logged out. 
Waiting for processes to exit. Mar 18 07:07:50.305392 systemd-logind[1585]: Removed session 23. Mar 18 07:07:55.292062 systemd[1]: Started sshd@21-172.24.4.138:22-172.24.4.1:37756.service - OpenSSH per-connection server daemon (172.24.4.1:37756). Mar 18 07:07:56.594567 sshd[4482]: Accepted publickey for core from 172.24.4.1 port 37756 ssh2: RSA SHA256:GaTLl7mOYsQLB+EQCHdFQClTpmSOwApV2HNS8pfyec0 Mar 18 07:07:56.597768 sshd-session[4482]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 18 07:07:56.611352 systemd-logind[1585]: New session 24 of user core. Mar 18 07:07:56.617261 systemd[1]: Started session-24.scope - Session 24 of User core. Mar 18 07:07:57.285873 sshd[4485]: Connection closed by 172.24.4.1 port 37756 Mar 18 07:07:57.287012 sshd-session[4482]: pam_unix(sshd:session): session closed for user core Mar 18 07:07:57.293545 systemd[1]: sshd@21-172.24.4.138:22-172.24.4.1:37756.service: Deactivated successfully. Mar 18 07:07:57.293672 systemd-logind[1585]: Session 24 logged out. Waiting for processes to exit. Mar 18 07:07:57.302823 systemd[1]: session-24.scope: Deactivated successfully. Mar 18 07:07:57.305842 systemd-logind[1585]: Removed session 24. Mar 18 07:08:02.301153 systemd[1]: Started sshd@22-172.24.4.138:22-172.24.4.1:37772.service - OpenSSH per-connection server daemon (172.24.4.1:37772). Mar 18 07:08:03.494764 sshd[4497]: Accepted publickey for core from 172.24.4.1 port 37772 ssh2: RSA SHA256:GaTLl7mOYsQLB+EQCHdFQClTpmSOwApV2HNS8pfyec0 Mar 18 07:08:03.497350 sshd-session[4497]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 18 07:08:03.506732 systemd-logind[1585]: New session 25 of user core. Mar 18 07:08:03.514063 systemd[1]: Started session-25.scope - Session 25 of User core. 
Mar 18 07:08:04.224584 sshd[4500]: Connection closed by 172.24.4.1 port 37772 Mar 18 07:08:04.227272 sshd-session[4497]: pam_unix(sshd:session): session closed for user core Mar 18 07:08:04.242122 systemd[1]: Started sshd@23-172.24.4.138:22-172.24.4.1:51654.service - OpenSSH per-connection server daemon (172.24.4.1:51654). Mar 18 07:08:04.248102 systemd[1]: sshd@22-172.24.4.138:22-172.24.4.1:37772.service: Deactivated successfully. Mar 18 07:08:04.263130 systemd-logind[1585]: Session 25 logged out. Waiting for processes to exit. Mar 18 07:08:04.263225 systemd[1]: session-25.scope: Deactivated successfully. Mar 18 07:08:04.269251 systemd-logind[1585]: Removed session 25. Mar 18 07:08:05.535696 sshd[4507]: Accepted publickey for core from 172.24.4.1 port 51654 ssh2: RSA SHA256:GaTLl7mOYsQLB+EQCHdFQClTpmSOwApV2HNS8pfyec0 Mar 18 07:08:05.538250 sshd-session[4507]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 18 07:08:05.548916 systemd-logind[1585]: New session 26 of user core. Mar 18 07:08:05.556939 systemd[1]: Started session-26.scope - Session 26 of User core. 
Mar 18 07:08:07.599293 containerd[1606]: time="2025-03-18T07:08:07.598652550Z" level=info msg="StopContainer for \"d6d35eab7ebce92c0f0abda276209ac70356c9b33f43df741c0c5433c7789167\" with timeout 30 (s)" Mar 18 07:08:07.599293 containerd[1606]: time="2025-03-18T07:08:07.599086573Z" level=info msg="Stop container \"d6d35eab7ebce92c0f0abda276209ac70356c9b33f43df741c0c5433c7789167\" with signal terminated" Mar 18 07:08:07.624362 containerd[1606]: time="2025-03-18T07:08:07.622913100Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 18 07:08:07.631070 containerd[1606]: time="2025-03-18T07:08:07.630799994Z" level=info msg="StopContainer for \"98626ebdb36e4ac890cbfe7db4c5637df251dfeffcec40ba61e746316f0a1327\" with timeout 2 (s)" Mar 18 07:08:07.632209 containerd[1606]: time="2025-03-18T07:08:07.632106582Z" level=info msg="Stop container \"98626ebdb36e4ac890cbfe7db4c5637df251dfeffcec40ba61e746316f0a1327\" with signal terminated" Mar 18 07:08:07.641553 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d6d35eab7ebce92c0f0abda276209ac70356c9b33f43df741c0c5433c7789167-rootfs.mount: Deactivated successfully. 
Mar 18 07:08:07.644769 systemd-networkd[1210]: lxc_health: Link DOWN Mar 18 07:08:07.644776 systemd-networkd[1210]: lxc_health: Lost carrier Mar 18 07:08:07.657759 containerd[1606]: time="2025-03-18T07:08:07.657698716Z" level=info msg="shim disconnected" id=d6d35eab7ebce92c0f0abda276209ac70356c9b33f43df741c0c5433c7789167 namespace=k8s.io Mar 18 07:08:07.657759 containerd[1606]: time="2025-03-18T07:08:07.657748630Z" level=warning msg="cleaning up after shim disconnected" id=d6d35eab7ebce92c0f0abda276209ac70356c9b33f43df741c0c5433c7789167 namespace=k8s.io Mar 18 07:08:07.657759 containerd[1606]: time="2025-03-18T07:08:07.657758428Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 18 07:08:07.676871 containerd[1606]: time="2025-03-18T07:08:07.676517250Z" level=warning msg="cleanup warnings time=\"2025-03-18T07:08:07Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Mar 18 07:08:07.683030 containerd[1606]: time="2025-03-18T07:08:07.682969207Z" level=info msg="StopContainer for \"d6d35eab7ebce92c0f0abda276209ac70356c9b33f43df741c0c5433c7789167\" returns successfully" Mar 18 07:08:07.685712 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-98626ebdb36e4ac890cbfe7db4c5637df251dfeffcec40ba61e746316f0a1327-rootfs.mount: Deactivated successfully. 
Mar 18 07:08:07.689732 containerd[1606]: time="2025-03-18T07:08:07.688590588Z" level=info msg="StopPodSandbox for \"c44f1259973831ec1ebcbfbb23928cbb6add5613f5dc114e09eb9b9ff439623c\"" Mar 18 07:08:07.689732 containerd[1606]: time="2025-03-18T07:08:07.688633388Z" level=info msg="Container to stop \"d6d35eab7ebce92c0f0abda276209ac70356c9b33f43df741c0c5433c7789167\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 18 07:08:07.692394 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c44f1259973831ec1ebcbfbb23928cbb6add5613f5dc114e09eb9b9ff439623c-shm.mount: Deactivated successfully. Mar 18 07:08:07.696273 containerd[1606]: time="2025-03-18T07:08:07.696062466Z" level=info msg="shim disconnected" id=98626ebdb36e4ac890cbfe7db4c5637df251dfeffcec40ba61e746316f0a1327 namespace=k8s.io Mar 18 07:08:07.696581 containerd[1606]: time="2025-03-18T07:08:07.696562402Z" level=warning msg="cleaning up after shim disconnected" id=98626ebdb36e4ac890cbfe7db4c5637df251dfeffcec40ba61e746316f0a1327 namespace=k8s.io Mar 18 07:08:07.696688 containerd[1606]: time="2025-03-18T07:08:07.696672569Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 18 07:08:07.729353 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c44f1259973831ec1ebcbfbb23928cbb6add5613f5dc114e09eb9b9ff439623c-rootfs.mount: Deactivated successfully. 
Mar 18 07:08:07.737515 containerd[1606]: time="2025-03-18T07:08:07.737388936Z" level=info msg="StopContainer for \"98626ebdb36e4ac890cbfe7db4c5637df251dfeffcec40ba61e746316f0a1327\" returns successfully" Mar 18 07:08:07.748046 containerd[1606]: time="2025-03-18T07:08:07.738066175Z" level=info msg="StopPodSandbox for \"ba9dbfd47e641214b38f0e0de4115a3dfa7f3478985dc1483f2a86adfc490c3c\"" Mar 18 07:08:07.748046 containerd[1606]: time="2025-03-18T07:08:07.738098565Z" level=info msg="Container to stop \"c1469f4c03c4aae8b5a5309d5e981328ea7ee80c3745ad3d0fb0cbfe31622e57\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 18 07:08:07.748046 containerd[1606]: time="2025-03-18T07:08:07.738142688Z" level=info msg="Container to stop \"469db4638b30ba596b33aed4db9b1ec342b0cb66f92a7a4b0fe91d7ce8772088\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 18 07:08:07.748046 containerd[1606]: time="2025-03-18T07:08:07.738159059Z" level=info msg="Container to stop \"e50e3bf89e606abf8c5fbbdeaa1acbbe2b3ded46140c17cc9b7ae0c0a9e51a2a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 18 07:08:07.748046 containerd[1606]: time="2025-03-18T07:08:07.738173466Z" level=info msg="Container to stop \"ae3b7f1e87a99635039150d0cfe140d56f49909b934e9a671f0f141e17aa80f5\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 18 07:08:07.748046 containerd[1606]: time="2025-03-18T07:08:07.738184356Z" level=info msg="Container to stop \"98626ebdb36e4ac890cbfe7db4c5637df251dfeffcec40ba61e746316f0a1327\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 18 07:08:07.760498 containerd[1606]: time="2025-03-18T07:08:07.760147962Z" level=info msg="shim disconnected" id=c44f1259973831ec1ebcbfbb23928cbb6add5613f5dc114e09eb9b9ff439623c namespace=k8s.io Mar 18 07:08:07.760498 containerd[1606]: time="2025-03-18T07:08:07.760371742Z" level=warning msg="cleaning up after shim disconnected" 
id=c44f1259973831ec1ebcbfbb23928cbb6add5613f5dc114e09eb9b9ff439623c namespace=k8s.io Mar 18 07:08:07.760498 containerd[1606]: time="2025-03-18T07:08:07.760385247Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 18 07:08:07.776066 containerd[1606]: time="2025-03-18T07:08:07.776023904Z" level=info msg="TearDown network for sandbox \"c44f1259973831ec1ebcbfbb23928cbb6add5613f5dc114e09eb9b9ff439623c\" successfully" Mar 18 07:08:07.776066 containerd[1606]: time="2025-03-18T07:08:07.776062677Z" level=info msg="StopPodSandbox for \"c44f1259973831ec1ebcbfbb23928cbb6add5613f5dc114e09eb9b9ff439623c\" returns successfully" Mar 18 07:08:07.784319 containerd[1606]: time="2025-03-18T07:08:07.784158693Z" level=info msg="shim disconnected" id=ba9dbfd47e641214b38f0e0de4115a3dfa7f3478985dc1483f2a86adfc490c3c namespace=k8s.io Mar 18 07:08:07.784319 containerd[1606]: time="2025-03-18T07:08:07.784203537Z" level=warning msg="cleaning up after shim disconnected" id=ba9dbfd47e641214b38f0e0de4115a3dfa7f3478985dc1483f2a86adfc490c3c namespace=k8s.io Mar 18 07:08:07.784319 containerd[1606]: time="2025-03-18T07:08:07.784213085Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 18 07:08:07.800046 containerd[1606]: time="2025-03-18T07:08:07.799944437Z" level=info msg="TearDown network for sandbox \"ba9dbfd47e641214b38f0e0de4115a3dfa7f3478985dc1483f2a86adfc490c3c\" successfully" Mar 18 07:08:07.800046 containerd[1606]: time="2025-03-18T07:08:07.799973251Z" level=info msg="StopPodSandbox for \"ba9dbfd47e641214b38f0e0de4115a3dfa7f3478985dc1483f2a86adfc490c3c\" returns successfully" Mar 18 07:08:07.918543 kubelet[2941]: I0318 07:08:07.918266 2941 scope.go:117] "RemoveContainer" containerID="d6d35eab7ebce92c0f0abda276209ac70356c9b33f43df741c0c5433c7789167" Mar 18 07:08:07.924179 containerd[1606]: time="2025-03-18T07:08:07.924093496Z" level=info msg="RemoveContainer for \"d6d35eab7ebce92c0f0abda276209ac70356c9b33f43df741c0c5433c7789167\"" Mar 18 07:08:07.948282 containerd[1606]: 
time="2025-03-18T07:08:07.948202371Z" level=info msg="RemoveContainer for \"d6d35eab7ebce92c0f0abda276209ac70356c9b33f43df741c0c5433c7789167\" returns successfully" Mar 18 07:08:07.948599 kubelet[2941]: I0318 07:08:07.948475 2941 scope.go:117] "RemoveContainer" containerID="d6d35eab7ebce92c0f0abda276209ac70356c9b33f43df741c0c5433c7789167" Mar 18 07:08:07.949289 containerd[1606]: time="2025-03-18T07:08:07.949099141Z" level=error msg="ContainerStatus for \"d6d35eab7ebce92c0f0abda276209ac70356c9b33f43df741c0c5433c7789167\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d6d35eab7ebce92c0f0abda276209ac70356c9b33f43df741c0c5433c7789167\": not found" Mar 18 07:08:07.949542 kubelet[2941]: E0318 07:08:07.949366 2941 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d6d35eab7ebce92c0f0abda276209ac70356c9b33f43df741c0c5433c7789167\": not found" containerID="d6d35eab7ebce92c0f0abda276209ac70356c9b33f43df741c0c5433c7789167" Mar 18 07:08:07.949542 kubelet[2941]: I0318 07:08:07.949402 2941 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d6d35eab7ebce92c0f0abda276209ac70356c9b33f43df741c0c5433c7789167"} err="failed to get container status \"d6d35eab7ebce92c0f0abda276209ac70356c9b33f43df741c0c5433c7789167\": rpc error: code = NotFound desc = an error occurred when try to find container \"d6d35eab7ebce92c0f0abda276209ac70356c9b33f43df741c0c5433c7789167\": not found" Mar 18 07:08:07.949542 kubelet[2941]: I0318 07:08:07.949479 2941 scope.go:117] "RemoveContainer" containerID="98626ebdb36e4ac890cbfe7db4c5637df251dfeffcec40ba61e746316f0a1327" Mar 18 07:08:07.950962 containerd[1606]: time="2025-03-18T07:08:07.950420436Z" level=info msg="RemoveContainer for \"98626ebdb36e4ac890cbfe7db4c5637df251dfeffcec40ba61e746316f0a1327\"" Mar 18 07:08:07.963579 containerd[1606]: 
time="2025-03-18T07:08:07.963524855Z" level=info msg="RemoveContainer for \"98626ebdb36e4ac890cbfe7db4c5637df251dfeffcec40ba61e746316f0a1327\" returns successfully" Mar 18 07:08:07.964187 kubelet[2941]: I0318 07:08:07.964145 2941 scope.go:117] "RemoveContainer" containerID="469db4638b30ba596b33aed4db9b1ec342b0cb66f92a7a4b0fe91d7ce8772088" Mar 18 07:08:07.966590 kubelet[2941]: I0318 07:08:07.965623 2941 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a0d6ced7-34d3-49b7-8050-a8ef1b64a013-host-proc-sys-kernel\") pod \"a0d6ced7-34d3-49b7-8050-a8ef1b64a013\" (UID: \"a0d6ced7-34d3-49b7-8050-a8ef1b64a013\") " Mar 18 07:08:07.966590 kubelet[2941]: I0318 07:08:07.965694 2941 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a0d6ced7-34d3-49b7-8050-a8ef1b64a013-cilium-run\") pod \"a0d6ced7-34d3-49b7-8050-a8ef1b64a013\" (UID: \"a0d6ced7-34d3-49b7-8050-a8ef1b64a013\") " Mar 18 07:08:07.966590 kubelet[2941]: I0318 07:08:07.965738 2941 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a0d6ced7-34d3-49b7-8050-a8ef1b64a013-etc-cni-netd\") pod \"a0d6ced7-34d3-49b7-8050-a8ef1b64a013\" (UID: \"a0d6ced7-34d3-49b7-8050-a8ef1b64a013\") " Mar 18 07:08:07.966590 kubelet[2941]: I0318 07:08:07.965782 2941 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a0d6ced7-34d3-49b7-8050-a8ef1b64a013-hostproc\") pod \"a0d6ced7-34d3-49b7-8050-a8ef1b64a013\" (UID: \"a0d6ced7-34d3-49b7-8050-a8ef1b64a013\") " Mar 18 07:08:07.966590 kubelet[2941]: I0318 07:08:07.965820 2941 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a0d6ced7-34d3-49b7-8050-a8ef1b64a013-xtables-lock\") pod 
\"a0d6ced7-34d3-49b7-8050-a8ef1b64a013\" (UID: \"a0d6ced7-34d3-49b7-8050-a8ef1b64a013\") " Mar 18 07:08:07.966590 kubelet[2941]: I0318 07:08:07.965861 2941 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a0d6ced7-34d3-49b7-8050-a8ef1b64a013-host-proc-sys-net\") pod \"a0d6ced7-34d3-49b7-8050-a8ef1b64a013\" (UID: \"a0d6ced7-34d3-49b7-8050-a8ef1b64a013\") " Mar 18 07:08:07.967066 kubelet[2941]: I0318 07:08:07.965914 2941 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a0d6ced7-34d3-49b7-8050-a8ef1b64a013-clustermesh-secrets\") pod \"a0d6ced7-34d3-49b7-8050-a8ef1b64a013\" (UID: \"a0d6ced7-34d3-49b7-8050-a8ef1b64a013\") " Mar 18 07:08:07.967066 kubelet[2941]: I0318 07:08:07.965957 2941 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a0d6ced7-34d3-49b7-8050-a8ef1b64a013-lib-modules\") pod \"a0d6ced7-34d3-49b7-8050-a8ef1b64a013\" (UID: \"a0d6ced7-34d3-49b7-8050-a8ef1b64a013\") " Mar 18 07:08:07.967066 kubelet[2941]: I0318 07:08:07.965995 2941 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a0d6ced7-34d3-49b7-8050-a8ef1b64a013-cilium-cgroup\") pod \"a0d6ced7-34d3-49b7-8050-a8ef1b64a013\" (UID: \"a0d6ced7-34d3-49b7-8050-a8ef1b64a013\") " Mar 18 07:08:07.967066 kubelet[2941]: I0318 07:08:07.966059 2941 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bb36754b-5416-4dcd-8180-7b7a9314c8a3-cilium-config-path\") pod \"bb36754b-5416-4dcd-8180-7b7a9314c8a3\" (UID: \"bb36754b-5416-4dcd-8180-7b7a9314c8a3\") " Mar 18 07:08:07.967066 kubelet[2941]: I0318 07:08:07.966100 2941 reconciler_common.go:161] "operationExecutor.UnmountVolume started for 
volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a0d6ced7-34d3-49b7-8050-a8ef1b64a013-bpf-maps\") pod \"a0d6ced7-34d3-49b7-8050-a8ef1b64a013\" (UID: \"a0d6ced7-34d3-49b7-8050-a8ef1b64a013\") " Mar 18 07:08:07.967066 kubelet[2941]: I0318 07:08:07.966146 2941 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t6n6z\" (UniqueName: \"kubernetes.io/projected/bb36754b-5416-4dcd-8180-7b7a9314c8a3-kube-api-access-t6n6z\") pod \"bb36754b-5416-4dcd-8180-7b7a9314c8a3\" (UID: \"bb36754b-5416-4dcd-8180-7b7a9314c8a3\") " Mar 18 07:08:07.967421 kubelet[2941]: I0318 07:08:07.966185 2941 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a0d6ced7-34d3-49b7-8050-a8ef1b64a013-cni-path\") pod \"a0d6ced7-34d3-49b7-8050-a8ef1b64a013\" (UID: \"a0d6ced7-34d3-49b7-8050-a8ef1b64a013\") " Mar 18 07:08:07.967421 kubelet[2941]: I0318 07:08:07.966229 2941 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a0d6ced7-34d3-49b7-8050-a8ef1b64a013-hubble-tls\") pod \"a0d6ced7-34d3-49b7-8050-a8ef1b64a013\" (UID: \"a0d6ced7-34d3-49b7-8050-a8ef1b64a013\") " Mar 18 07:08:07.967421 kubelet[2941]: I0318 07:08:07.966272 2941 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a0d6ced7-34d3-49b7-8050-a8ef1b64a013-cilium-config-path\") pod \"a0d6ced7-34d3-49b7-8050-a8ef1b64a013\" (UID: \"a0d6ced7-34d3-49b7-8050-a8ef1b64a013\") " Mar 18 07:08:07.967421 kubelet[2941]: I0318 07:08:07.966314 2941 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9tl8x\" (UniqueName: \"kubernetes.io/projected/a0d6ced7-34d3-49b7-8050-a8ef1b64a013-kube-api-access-9tl8x\") pod \"a0d6ced7-34d3-49b7-8050-a8ef1b64a013\" (UID: \"a0d6ced7-34d3-49b7-8050-a8ef1b64a013\") " Mar 18 07:08:07.969006 
kubelet[2941]: I0318 07:08:07.965901 2941 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a0d6ced7-34d3-49b7-8050-a8ef1b64a013-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "a0d6ced7-34d3-49b7-8050-a8ef1b64a013" (UID: "a0d6ced7-34d3-49b7-8050-a8ef1b64a013"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 07:08:07.969285 kubelet[2941]: I0318 07:08:07.965983 2941 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a0d6ced7-34d3-49b7-8050-a8ef1b64a013-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "a0d6ced7-34d3-49b7-8050-a8ef1b64a013" (UID: "a0d6ced7-34d3-49b7-8050-a8ef1b64a013"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 07:08:07.969285 kubelet[2941]: I0318 07:08:07.966063 2941 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a0d6ced7-34d3-49b7-8050-a8ef1b64a013-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "a0d6ced7-34d3-49b7-8050-a8ef1b64a013" (UID: "a0d6ced7-34d3-49b7-8050-a8ef1b64a013"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 07:08:07.969285 kubelet[2941]: I0318 07:08:07.966094 2941 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a0d6ced7-34d3-49b7-8050-a8ef1b64a013-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "a0d6ced7-34d3-49b7-8050-a8ef1b64a013" (UID: "a0d6ced7-34d3-49b7-8050-a8ef1b64a013"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 07:08:07.970246 kubelet[2941]: I0318 07:08:07.966119 2941 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a0d6ced7-34d3-49b7-8050-a8ef1b64a013-hostproc" (OuterVolumeSpecName: "hostproc") pod "a0d6ced7-34d3-49b7-8050-a8ef1b64a013" (UID: "a0d6ced7-34d3-49b7-8050-a8ef1b64a013"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 07:08:07.970246 kubelet[2941]: I0318 07:08:07.966144 2941 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a0d6ced7-34d3-49b7-8050-a8ef1b64a013-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "a0d6ced7-34d3-49b7-8050-a8ef1b64a013" (UID: "a0d6ced7-34d3-49b7-8050-a8ef1b64a013"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 07:08:07.970246 kubelet[2941]: I0318 07:08:07.966177 2941 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a0d6ced7-34d3-49b7-8050-a8ef1b64a013-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "a0d6ced7-34d3-49b7-8050-a8ef1b64a013" (UID: "a0d6ced7-34d3-49b7-8050-a8ef1b64a013"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 07:08:07.970246 kubelet[2941]: I0318 07:08:07.969738 2941 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a0d6ced7-34d3-49b7-8050-a8ef1b64a013-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "a0d6ced7-34d3-49b7-8050-a8ef1b64a013" (UID: "a0d6ced7-34d3-49b7-8050-a8ef1b64a013"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 07:08:07.971820 kubelet[2941]: I0318 07:08:07.971730 2941 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a0d6ced7-34d3-49b7-8050-a8ef1b64a013-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "a0d6ced7-34d3-49b7-8050-a8ef1b64a013" (UID: "a0d6ced7-34d3-49b7-8050-a8ef1b64a013"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 07:08:07.974802 containerd[1606]: time="2025-03-18T07:08:07.974701413Z" level=info msg="RemoveContainer for \"469db4638b30ba596b33aed4db9b1ec342b0cb66f92a7a4b0fe91d7ce8772088\"" Mar 18 07:08:07.975246 kubelet[2941]: I0318 07:08:07.975161 2941 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a0d6ced7-34d3-49b7-8050-a8ef1b64a013-cni-path" (OuterVolumeSpecName: "cni-path") pod "a0d6ced7-34d3-49b7-8050-a8ef1b64a013" (UID: "a0d6ced7-34d3-49b7-8050-a8ef1b64a013"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 07:08:07.980797 kubelet[2941]: I0318 07:08:07.980675 2941 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0d6ced7-34d3-49b7-8050-a8ef1b64a013-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "a0d6ced7-34d3-49b7-8050-a8ef1b64a013" (UID: "a0d6ced7-34d3-49b7-8050-a8ef1b64a013"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 07:08:07.986352 kubelet[2941]: I0318 07:08:07.986194 2941 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0d6ced7-34d3-49b7-8050-a8ef1b64a013-kube-api-access-9tl8x" (OuterVolumeSpecName: "kube-api-access-9tl8x") pod "a0d6ced7-34d3-49b7-8050-a8ef1b64a013" (UID: "a0d6ced7-34d3-49b7-8050-a8ef1b64a013"). InnerVolumeSpecName "kube-api-access-9tl8x". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 07:08:07.989024 containerd[1606]: time="2025-03-18T07:08:07.988893671Z" level=info msg="RemoveContainer for \"469db4638b30ba596b33aed4db9b1ec342b0cb66f92a7a4b0fe91d7ce8772088\" returns successfully" Mar 18 07:08:07.989978 kubelet[2941]: I0318 07:08:07.989801 2941 scope.go:117] "RemoveContainer" containerID="ae3b7f1e87a99635039150d0cfe140d56f49909b934e9a671f0f141e17aa80f5" Mar 18 07:08:07.995169 containerd[1606]: time="2025-03-18T07:08:07.994556469Z" level=info msg="RemoveContainer for \"ae3b7f1e87a99635039150d0cfe140d56f49909b934e9a671f0f141e17aa80f5\"" Mar 18 07:08:07.995355 kubelet[2941]: I0318 07:08:07.994596 2941 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb36754b-5416-4dcd-8180-7b7a9314c8a3-kube-api-access-t6n6z" (OuterVolumeSpecName: "kube-api-access-t6n6z") pod "bb36754b-5416-4dcd-8180-7b7a9314c8a3" (UID: "bb36754b-5416-4dcd-8180-7b7a9314c8a3"). InnerVolumeSpecName "kube-api-access-t6n6z". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 07:08:07.998206 kubelet[2941]: I0318 07:08:07.997543 2941 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0d6ced7-34d3-49b7-8050-a8ef1b64a013-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "a0d6ced7-34d3-49b7-8050-a8ef1b64a013" (UID: "a0d6ced7-34d3-49b7-8050-a8ef1b64a013"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 07:08:08.000683 kubelet[2941]: I0318 07:08:08.000579 2941 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a0d6ced7-34d3-49b7-8050-a8ef1b64a013-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "a0d6ced7-34d3-49b7-8050-a8ef1b64a013" (UID: "a0d6ced7-34d3-49b7-8050-a8ef1b64a013"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 07:08:08.000896 kubelet[2941]: I0318 07:08:08.000756 2941 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bb36754b-5416-4dcd-8180-7b7a9314c8a3-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "bb36754b-5416-4dcd-8180-7b7a9314c8a3" (UID: "bb36754b-5416-4dcd-8180-7b7a9314c8a3"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 07:08:08.009120 containerd[1606]: time="2025-03-18T07:08:08.008953130Z" level=info msg="RemoveContainer for \"ae3b7f1e87a99635039150d0cfe140d56f49909b934e9a671f0f141e17aa80f5\" returns successfully" Mar 18 07:08:08.009405 kubelet[2941]: I0318 07:08:08.009320 2941 scope.go:117] "RemoveContainer" containerID="e50e3bf89e606abf8c5fbbdeaa1acbbe2b3ded46140c17cc9b7ae0c0a9e51a2a" Mar 18 07:08:08.012833 containerd[1606]: time="2025-03-18T07:08:08.012757026Z" level=info msg="RemoveContainer for \"e50e3bf89e606abf8c5fbbdeaa1acbbe2b3ded46140c17cc9b7ae0c0a9e51a2a\"" Mar 18 07:08:08.024285 containerd[1606]: time="2025-03-18T07:08:08.024156511Z" level=info msg="RemoveContainer for \"e50e3bf89e606abf8c5fbbdeaa1acbbe2b3ded46140c17cc9b7ae0c0a9e51a2a\" returns successfully" Mar 18 07:08:08.024557 kubelet[2941]: I0318 07:08:08.024501 2941 scope.go:117] "RemoveContainer" containerID="c1469f4c03c4aae8b5a5309d5e981328ea7ee80c3745ad3d0fb0cbfe31622e57" Mar 18 07:08:08.026722 containerd[1606]: time="2025-03-18T07:08:08.026666143Z" level=info msg="RemoveContainer for \"c1469f4c03c4aae8b5a5309d5e981328ea7ee80c3745ad3d0fb0cbfe31622e57\"" Mar 18 07:08:08.035203 containerd[1606]: time="2025-03-18T07:08:08.035149075Z" level=info msg="RemoveContainer for \"c1469f4c03c4aae8b5a5309d5e981328ea7ee80c3745ad3d0fb0cbfe31622e57\" returns successfully" Mar 18 07:08:08.035842 kubelet[2941]: I0318 07:08:08.035759 2941 scope.go:117] "RemoveContainer" 
containerID="98626ebdb36e4ac890cbfe7db4c5637df251dfeffcec40ba61e746316f0a1327" Mar 18 07:08:08.036250 containerd[1606]: time="2025-03-18T07:08:08.036150902Z" level=error msg="ContainerStatus for \"98626ebdb36e4ac890cbfe7db4c5637df251dfeffcec40ba61e746316f0a1327\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"98626ebdb36e4ac890cbfe7db4c5637df251dfeffcec40ba61e746316f0a1327\": not found" Mar 18 07:08:08.036590 kubelet[2941]: E0318 07:08:08.036544 2941 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"98626ebdb36e4ac890cbfe7db4c5637df251dfeffcec40ba61e746316f0a1327\": not found" containerID="98626ebdb36e4ac890cbfe7db4c5637df251dfeffcec40ba61e746316f0a1327" Mar 18 07:08:08.036796 kubelet[2941]: I0318 07:08:08.036600 2941 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"98626ebdb36e4ac890cbfe7db4c5637df251dfeffcec40ba61e746316f0a1327"} err="failed to get container status \"98626ebdb36e4ac890cbfe7db4c5637df251dfeffcec40ba61e746316f0a1327\": rpc error: code = NotFound desc = an error occurred when try to find container \"98626ebdb36e4ac890cbfe7db4c5637df251dfeffcec40ba61e746316f0a1327\": not found" Mar 18 07:08:08.036796 kubelet[2941]: I0318 07:08:08.036645 2941 scope.go:117] "RemoveContainer" containerID="469db4638b30ba596b33aed4db9b1ec342b0cb66f92a7a4b0fe91d7ce8772088" Mar 18 07:08:08.037166 containerd[1606]: time="2025-03-18T07:08:08.037040037Z" level=error msg="ContainerStatus for \"469db4638b30ba596b33aed4db9b1ec342b0cb66f92a7a4b0fe91d7ce8772088\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"469db4638b30ba596b33aed4db9b1ec342b0cb66f92a7a4b0fe91d7ce8772088\": not found" Mar 18 07:08:08.037336 kubelet[2941]: E0318 07:08:08.037287 2941 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc 
error: code = NotFound desc = an error occurred when try to find container \"469db4638b30ba596b33aed4db9b1ec342b0cb66f92a7a4b0fe91d7ce8772088\": not found" containerID="469db4638b30ba596b33aed4db9b1ec342b0cb66f92a7a4b0fe91d7ce8772088" Mar 18 07:08:08.037336 kubelet[2941]: I0318 07:08:08.037332 2941 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"469db4638b30ba596b33aed4db9b1ec342b0cb66f92a7a4b0fe91d7ce8772088"} err="failed to get container status \"469db4638b30ba596b33aed4db9b1ec342b0cb66f92a7a4b0fe91d7ce8772088\": rpc error: code = NotFound desc = an error occurred when try to find container \"469db4638b30ba596b33aed4db9b1ec342b0cb66f92a7a4b0fe91d7ce8772088\": not found" Mar 18 07:08:08.038049 kubelet[2941]: I0318 07:08:08.037370 2941 scope.go:117] "RemoveContainer" containerID="ae3b7f1e87a99635039150d0cfe140d56f49909b934e9a671f0f141e17aa80f5" Mar 18 07:08:08.038174 containerd[1606]: time="2025-03-18T07:08:08.037759816Z" level=error msg="ContainerStatus for \"ae3b7f1e87a99635039150d0cfe140d56f49909b934e9a671f0f141e17aa80f5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ae3b7f1e87a99635039150d0cfe140d56f49909b934e9a671f0f141e17aa80f5\": not found" Mar 18 07:08:08.038240 kubelet[2941]: E0318 07:08:08.038125 2941 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ae3b7f1e87a99635039150d0cfe140d56f49909b934e9a671f0f141e17aa80f5\": not found" containerID="ae3b7f1e87a99635039150d0cfe140d56f49909b934e9a671f0f141e17aa80f5" Mar 18 07:08:08.038240 kubelet[2941]: I0318 07:08:08.038171 2941 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ae3b7f1e87a99635039150d0cfe140d56f49909b934e9a671f0f141e17aa80f5"} err="failed to get container status \"ae3b7f1e87a99635039150d0cfe140d56f49909b934e9a671f0f141e17aa80f5\": rpc error: 
code = NotFound desc = an error occurred when try to find container \"ae3b7f1e87a99635039150d0cfe140d56f49909b934e9a671f0f141e17aa80f5\": not found" Mar 18 07:08:08.038240 kubelet[2941]: I0318 07:08:08.038204 2941 scope.go:117] "RemoveContainer" containerID="e50e3bf89e606abf8c5fbbdeaa1acbbe2b3ded46140c17cc9b7ae0c0a9e51a2a" Mar 18 07:08:08.038664 containerd[1606]: time="2025-03-18T07:08:08.038571476Z" level=error msg="ContainerStatus for \"e50e3bf89e606abf8c5fbbdeaa1acbbe2b3ded46140c17cc9b7ae0c0a9e51a2a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e50e3bf89e606abf8c5fbbdeaa1acbbe2b3ded46140c17cc9b7ae0c0a9e51a2a\": not found" Mar 18 07:08:08.038882 kubelet[2941]: E0318 07:08:08.038803 2941 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e50e3bf89e606abf8c5fbbdeaa1acbbe2b3ded46140c17cc9b7ae0c0a9e51a2a\": not found" containerID="e50e3bf89e606abf8c5fbbdeaa1acbbe2b3ded46140c17cc9b7ae0c0a9e51a2a" Mar 18 07:08:08.038882 kubelet[2941]: I0318 07:08:08.038849 2941 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e50e3bf89e606abf8c5fbbdeaa1acbbe2b3ded46140c17cc9b7ae0c0a9e51a2a"} err="failed to get container status \"e50e3bf89e606abf8c5fbbdeaa1acbbe2b3ded46140c17cc9b7ae0c0a9e51a2a\": rpc error: code = NotFound desc = an error occurred when try to find container \"e50e3bf89e606abf8c5fbbdeaa1acbbe2b3ded46140c17cc9b7ae0c0a9e51a2a\": not found" Mar 18 07:08:08.038882 kubelet[2941]: I0318 07:08:08.038881 2941 scope.go:117] "RemoveContainer" containerID="c1469f4c03c4aae8b5a5309d5e981328ea7ee80c3745ad3d0fb0cbfe31622e57" Mar 18 07:08:08.040087 containerd[1606]: time="2025-03-18T07:08:08.039854931Z" level=error msg="ContainerStatus for \"c1469f4c03c4aae8b5a5309d5e981328ea7ee80c3745ad3d0fb0cbfe31622e57\" failed" error="rpc error: code = NotFound desc = an error occurred when try 
to find container \"c1469f4c03c4aae8b5a5309d5e981328ea7ee80c3745ad3d0fb0cbfe31622e57\": not found" Mar 18 07:08:08.040575 kubelet[2941]: E0318 07:08:08.040430 2941 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c1469f4c03c4aae8b5a5309d5e981328ea7ee80c3745ad3d0fb0cbfe31622e57\": not found" containerID="c1469f4c03c4aae8b5a5309d5e981328ea7ee80c3745ad3d0fb0cbfe31622e57" Mar 18 07:08:08.040929 kubelet[2941]: I0318 07:08:08.040700 2941 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c1469f4c03c4aae8b5a5309d5e981328ea7ee80c3745ad3d0fb0cbfe31622e57"} err="failed to get container status \"c1469f4c03c4aae8b5a5309d5e981328ea7ee80c3745ad3d0fb0cbfe31622e57\": rpc error: code = NotFound desc = an error occurred when try to find container \"c1469f4c03c4aae8b5a5309d5e981328ea7ee80c3745ad3d0fb0cbfe31622e57\": not found" Mar 18 07:08:08.067093 kubelet[2941]: I0318 07:08:08.066995 2941 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a0d6ced7-34d3-49b7-8050-a8ef1b64a013-etc-cni-netd\") on node \"ci-4152-2-2-a-a1f36745dc.novalocal\" DevicePath \"\"" Mar 18 07:08:08.067093 kubelet[2941]: I0318 07:08:08.067088 2941 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a0d6ced7-34d3-49b7-8050-a8ef1b64a013-cilium-run\") on node \"ci-4152-2-2-a-a1f36745dc.novalocal\" DevicePath \"\"" Mar 18 07:08:08.067306 kubelet[2941]: I0318 07:08:08.067118 2941 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a0d6ced7-34d3-49b7-8050-a8ef1b64a013-xtables-lock\") on node \"ci-4152-2-2-a-a1f36745dc.novalocal\" DevicePath \"\"" Mar 18 07:08:08.067306 kubelet[2941]: I0318 07:08:08.067144 2941 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" 
(UniqueName: \"kubernetes.io/host-path/a0d6ced7-34d3-49b7-8050-a8ef1b64a013-host-proc-sys-net\") on node \"ci-4152-2-2-a-a1f36745dc.novalocal\" DevicePath \"\"" Mar 18 07:08:08.067306 kubelet[2941]: I0318 07:08:08.067171 2941 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a0d6ced7-34d3-49b7-8050-a8ef1b64a013-clustermesh-secrets\") on node \"ci-4152-2-2-a-a1f36745dc.novalocal\" DevicePath \"\"" Mar 18 07:08:08.067306 kubelet[2941]: I0318 07:08:08.067196 2941 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a0d6ced7-34d3-49b7-8050-a8ef1b64a013-lib-modules\") on node \"ci-4152-2-2-a-a1f36745dc.novalocal\" DevicePath \"\"" Mar 18 07:08:08.067306 kubelet[2941]: I0318 07:08:08.067218 2941 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a0d6ced7-34d3-49b7-8050-a8ef1b64a013-hostproc\") on node \"ci-4152-2-2-a-a1f36745dc.novalocal\" DevicePath \"\"" Mar 18 07:08:08.067306 kubelet[2941]: I0318 07:08:08.067240 2941 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a0d6ced7-34d3-49b7-8050-a8ef1b64a013-cilium-cgroup\") on node \"ci-4152-2-2-a-a1f36745dc.novalocal\" DevicePath \"\"" Mar 18 07:08:08.067306 kubelet[2941]: I0318 07:08:08.067267 2941 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bb36754b-5416-4dcd-8180-7b7a9314c8a3-cilium-config-path\") on node \"ci-4152-2-2-a-a1f36745dc.novalocal\" DevicePath \"\"" Mar 18 07:08:08.067761 kubelet[2941]: I0318 07:08:08.067290 2941 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a0d6ced7-34d3-49b7-8050-a8ef1b64a013-bpf-maps\") on node \"ci-4152-2-2-a-a1f36745dc.novalocal\" DevicePath \"\"" Mar 18 07:08:08.067761 kubelet[2941]: I0318 07:08:08.067314 2941 
reconciler_common.go:289] "Volume detached for volume \"kube-api-access-t6n6z\" (UniqueName: \"kubernetes.io/projected/bb36754b-5416-4dcd-8180-7b7a9314c8a3-kube-api-access-t6n6z\") on node \"ci-4152-2-2-a-a1f36745dc.novalocal\" DevicePath \"\"" Mar 18 07:08:08.067761 kubelet[2941]: I0318 07:08:08.067341 2941 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a0d6ced7-34d3-49b7-8050-a8ef1b64a013-cilium-config-path\") on node \"ci-4152-2-2-a-a1f36745dc.novalocal\" DevicePath \"\"" Mar 18 07:08:08.067761 kubelet[2941]: I0318 07:08:08.067366 2941 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a0d6ced7-34d3-49b7-8050-a8ef1b64a013-cni-path\") on node \"ci-4152-2-2-a-a1f36745dc.novalocal\" DevicePath \"\"" Mar 18 07:08:08.067761 kubelet[2941]: I0318 07:08:08.067388 2941 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a0d6ced7-34d3-49b7-8050-a8ef1b64a013-hubble-tls\") on node \"ci-4152-2-2-a-a1f36745dc.novalocal\" DevicePath \"\"" Mar 18 07:08:08.067761 kubelet[2941]: I0318 07:08:08.067414 2941 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-9tl8x\" (UniqueName: \"kubernetes.io/projected/a0d6ced7-34d3-49b7-8050-a8ef1b64a013-kube-api-access-9tl8x\") on node \"ci-4152-2-2-a-a1f36745dc.novalocal\" DevicePath \"\"" Mar 18 07:08:08.067761 kubelet[2941]: I0318 07:08:08.067479 2941 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a0d6ced7-34d3-49b7-8050-a8ef1b64a013-host-proc-sys-kernel\") on node \"ci-4152-2-2-a-a1f36745dc.novalocal\" DevicePath \"\"" Mar 18 07:08:08.612525 systemd[1]: var-lib-kubelet-pods-bb36754b\x2d5416\x2d4dcd\x2d8180\x2d7b7a9314c8a3-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dt6n6z.mount: Deactivated successfully. 
Mar 18 07:08:08.612873 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ba9dbfd47e641214b38f0e0de4115a3dfa7f3478985dc1483f2a86adfc490c3c-rootfs.mount: Deactivated successfully. Mar 18 07:08:08.613123 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ba9dbfd47e641214b38f0e0de4115a3dfa7f3478985dc1483f2a86adfc490c3c-shm.mount: Deactivated successfully. Mar 18 07:08:08.613619 systemd[1]: var-lib-kubelet-pods-a0d6ced7\x2d34d3\x2d49b7\x2d8050\x2da8ef1b64a013-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d9tl8x.mount: Deactivated successfully. Mar 18 07:08:08.613867 systemd[1]: var-lib-kubelet-pods-a0d6ced7\x2d34d3\x2d49b7\x2d8050\x2da8ef1b64a013-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Mar 18 07:08:08.614110 systemd[1]: var-lib-kubelet-pods-a0d6ced7\x2d34d3\x2d49b7\x2d8050\x2da8ef1b64a013-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Mar 18 07:08:09.367793 kubelet[2941]: I0318 07:08:09.367719 2941 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0d6ced7-34d3-49b7-8050-a8ef1b64a013" path="/var/lib/kubelet/pods/a0d6ced7-34d3-49b7-8050-a8ef1b64a013/volumes" Mar 18 07:08:09.369366 kubelet[2941]: I0318 07:08:09.369295 2941 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb36754b-5416-4dcd-8180-7b7a9314c8a3" path="/var/lib/kubelet/pods/bb36754b-5416-4dcd-8180-7b7a9314c8a3/volumes" Mar 18 07:08:09.479956 kubelet[2941]: E0318 07:08:09.479788 2941 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 18 07:08:09.707089 sshd[4513]: Connection closed by 172.24.4.1 port 51654 Mar 18 07:08:09.708566 sshd-session[4507]: pam_unix(sshd:session): session closed for user core Mar 18 07:08:09.724034 systemd[1]: Started sshd@24-172.24.4.138:22-172.24.4.1:51668.service - OpenSSH per-connection 
server daemon (172.24.4.1:51668). Mar 18 07:08:09.727058 systemd[1]: sshd@23-172.24.4.138:22-172.24.4.1:51654.service: Deactivated successfully. Mar 18 07:08:09.735046 systemd[1]: session-26.scope: Deactivated successfully. Mar 18 07:08:09.737948 systemd-logind[1585]: Session 26 logged out. Waiting for processes to exit. Mar 18 07:08:09.744208 systemd-logind[1585]: Removed session 26. Mar 18 07:08:10.725552 sshd[4676]: Accepted publickey for core from 172.24.4.1 port 51668 ssh2: RSA SHA256:GaTLl7mOYsQLB+EQCHdFQClTpmSOwApV2HNS8pfyec0 Mar 18 07:08:10.728308 sshd-session[4676]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 18 07:08:10.739054 systemd-logind[1585]: New session 27 of user core. Mar 18 07:08:10.744941 systemd[1]: Started session-27.scope - Session 27 of User core. Mar 18 07:08:11.768335 kubelet[2941]: I0318 07:08:11.768288 2941 topology_manager.go:215] "Topology Admit Handler" podUID="85194c93-8789-4ac4-8bc8-408eec6f3a90" podNamespace="kube-system" podName="cilium-fghfq" Mar 18 07:08:11.768335 kubelet[2941]: E0318 07:08:11.768348 2941 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a0d6ced7-34d3-49b7-8050-a8ef1b64a013" containerName="apply-sysctl-overwrites" Mar 18 07:08:11.768782 kubelet[2941]: E0318 07:08:11.768358 2941 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a0d6ced7-34d3-49b7-8050-a8ef1b64a013" containerName="mount-bpf-fs" Mar 18 07:08:11.768782 kubelet[2941]: E0318 07:08:11.768366 2941 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a0d6ced7-34d3-49b7-8050-a8ef1b64a013" containerName="clean-cilium-state" Mar 18 07:08:11.768782 kubelet[2941]: E0318 07:08:11.768374 2941 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a0d6ced7-34d3-49b7-8050-a8ef1b64a013" containerName="mount-cgroup" Mar 18 07:08:11.768782 kubelet[2941]: E0318 07:08:11.768380 2941 cpu_manager.go:395] "RemoveStaleState: removing container" 
podUID="bb36754b-5416-4dcd-8180-7b7a9314c8a3" containerName="cilium-operator" Mar 18 07:08:11.768782 kubelet[2941]: E0318 07:08:11.768386 2941 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a0d6ced7-34d3-49b7-8050-a8ef1b64a013" containerName="cilium-agent" Mar 18 07:08:11.768782 kubelet[2941]: I0318 07:08:11.768409 2941 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0d6ced7-34d3-49b7-8050-a8ef1b64a013" containerName="cilium-agent" Mar 18 07:08:11.768782 kubelet[2941]: I0318 07:08:11.768416 2941 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb36754b-5416-4dcd-8180-7b7a9314c8a3" containerName="cilium-operator" Mar 18 07:08:11.894414 kubelet[2941]: I0318 07:08:11.894294 2941 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/85194c93-8789-4ac4-8bc8-408eec6f3a90-cilium-config-path\") pod \"cilium-fghfq\" (UID: \"85194c93-8789-4ac4-8bc8-408eec6f3a90\") " pod="kube-system/cilium-fghfq" Mar 18 07:08:11.894414 kubelet[2941]: I0318 07:08:11.894336 2941 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/85194c93-8789-4ac4-8bc8-408eec6f3a90-bpf-maps\") pod \"cilium-fghfq\" (UID: \"85194c93-8789-4ac4-8bc8-408eec6f3a90\") " pod="kube-system/cilium-fghfq" Mar 18 07:08:11.894414 kubelet[2941]: I0318 07:08:11.894360 2941 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/85194c93-8789-4ac4-8bc8-408eec6f3a90-hostproc\") pod \"cilium-fghfq\" (UID: \"85194c93-8789-4ac4-8bc8-408eec6f3a90\") " pod="kube-system/cilium-fghfq" Mar 18 07:08:11.894414 kubelet[2941]: I0318 07:08:11.894378 2941 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/85194c93-8789-4ac4-8bc8-408eec6f3a90-xtables-lock\") pod \"cilium-fghfq\" (UID: \"85194c93-8789-4ac4-8bc8-408eec6f3a90\") " pod="kube-system/cilium-fghfq"
Mar 18 07:08:11.895323 kubelet[2941]: I0318 07:08:11.894475 2941 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/85194c93-8789-4ac4-8bc8-408eec6f3a90-lib-modules\") pod \"cilium-fghfq\" (UID: \"85194c93-8789-4ac4-8bc8-408eec6f3a90\") " pod="kube-system/cilium-fghfq"
Mar 18 07:08:11.895323 kubelet[2941]: I0318 07:08:11.894522 2941 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9r67n\" (UniqueName: \"kubernetes.io/projected/85194c93-8789-4ac4-8bc8-408eec6f3a90-kube-api-access-9r67n\") pod \"cilium-fghfq\" (UID: \"85194c93-8789-4ac4-8bc8-408eec6f3a90\") " pod="kube-system/cilium-fghfq"
Mar 18 07:08:11.895323 kubelet[2941]: I0318 07:08:11.894556 2941 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/85194c93-8789-4ac4-8bc8-408eec6f3a90-hubble-tls\") pod \"cilium-fghfq\" (UID: \"85194c93-8789-4ac4-8bc8-408eec6f3a90\") " pod="kube-system/cilium-fghfq"
Mar 18 07:08:11.895323 kubelet[2941]: I0318 07:08:11.894599 2941 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/85194c93-8789-4ac4-8bc8-408eec6f3a90-cilium-run\") pod \"cilium-fghfq\" (UID: \"85194c93-8789-4ac4-8bc8-408eec6f3a90\") " pod="kube-system/cilium-fghfq"
Mar 18 07:08:11.895323 kubelet[2941]: I0318 07:08:11.894624 2941 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/85194c93-8789-4ac4-8bc8-408eec6f3a90-cilium-cgroup\") pod \"cilium-fghfq\" (UID: \"85194c93-8789-4ac4-8bc8-408eec6f3a90\") " pod="kube-system/cilium-fghfq"
Mar 18 07:08:11.895323 kubelet[2941]: I0318 07:08:11.894668 2941 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/85194c93-8789-4ac4-8bc8-408eec6f3a90-clustermesh-secrets\") pod \"cilium-fghfq\" (UID: \"85194c93-8789-4ac4-8bc8-408eec6f3a90\") " pod="kube-system/cilium-fghfq"
Mar 18 07:08:11.895747 kubelet[2941]: I0318 07:08:11.894710 2941 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/85194c93-8789-4ac4-8bc8-408eec6f3a90-host-proc-sys-net\") pod \"cilium-fghfq\" (UID: \"85194c93-8789-4ac4-8bc8-408eec6f3a90\") " pod="kube-system/cilium-fghfq"
Mar 18 07:08:11.895747 kubelet[2941]: I0318 07:08:11.894753 2941 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/85194c93-8789-4ac4-8bc8-408eec6f3a90-host-proc-sys-kernel\") pod \"cilium-fghfq\" (UID: \"85194c93-8789-4ac4-8bc8-408eec6f3a90\") " pod="kube-system/cilium-fghfq"
Mar 18 07:08:11.895747 kubelet[2941]: I0318 07:08:11.894793 2941 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/85194c93-8789-4ac4-8bc8-408eec6f3a90-cni-path\") pod \"cilium-fghfq\" (UID: \"85194c93-8789-4ac4-8bc8-408eec6f3a90\") " pod="kube-system/cilium-fghfq"
Mar 18 07:08:11.895747 kubelet[2941]: I0318 07:08:11.894837 2941 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/85194c93-8789-4ac4-8bc8-408eec6f3a90-cilium-ipsec-secrets\") pod \"cilium-fghfq\" (UID: \"85194c93-8789-4ac4-8bc8-408eec6f3a90\") " pod="kube-system/cilium-fghfq"
Mar 18 07:08:11.895747 kubelet[2941]: I0318 07:08:11.894877 2941 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/85194c93-8789-4ac4-8bc8-408eec6f3a90-etc-cni-netd\") pod \"cilium-fghfq\" (UID: \"85194c93-8789-4ac4-8bc8-408eec6f3a90\") " pod="kube-system/cilium-fghfq"
Mar 18 07:08:11.911344 sshd[4682]: Connection closed by 172.24.4.1 port 51668
Mar 18 07:08:11.911150 sshd-session[4676]: pam_unix(sshd:session): session closed for user core
Mar 18 07:08:11.920620 systemd[1]: Started sshd@25-172.24.4.138:22-172.24.4.1:51676.service - OpenSSH per-connection server daemon (172.24.4.1:51676).
Mar 18 07:08:11.921050 systemd[1]: sshd@24-172.24.4.138:22-172.24.4.1:51668.service: Deactivated successfully.
Mar 18 07:08:11.927390 systemd-logind[1585]: Session 27 logged out. Waiting for processes to exit.
Mar 18 07:08:11.928122 systemd[1]: session-27.scope: Deactivated successfully.
Mar 18 07:08:11.934265 systemd-logind[1585]: Removed session 27.
Mar 18 07:08:12.080892 containerd[1606]: time="2025-03-18T07:08:12.080831442Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-fghfq,Uid:85194c93-8789-4ac4-8bc8-408eec6f3a90,Namespace:kube-system,Attempt:0,}"
Mar 18 07:08:12.142624 containerd[1606]: time="2025-03-18T07:08:12.141593753Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 18 07:08:12.142624 containerd[1606]: time="2025-03-18T07:08:12.141653536Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 18 07:08:12.142624 containerd[1606]: time="2025-03-18T07:08:12.141671660Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 18 07:08:12.142624 containerd[1606]: time="2025-03-18T07:08:12.141751010Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 18 07:08:12.186269 containerd[1606]: time="2025-03-18T07:08:12.186139255Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-fghfq,Uid:85194c93-8789-4ac4-8bc8-408eec6f3a90,Namespace:kube-system,Attempt:0,} returns sandbox id \"5b5c158efcd0457e709465217750a584e7a8080e59df8d0ca0f62a967003acc3\""
Mar 18 07:08:12.190854 containerd[1606]: time="2025-03-18T07:08:12.190727202Z" level=info msg="CreateContainer within sandbox \"5b5c158efcd0457e709465217750a584e7a8080e59df8d0ca0f62a967003acc3\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Mar 18 07:08:12.207603 containerd[1606]: time="2025-03-18T07:08:12.207549944Z" level=info msg="CreateContainer within sandbox \"5b5c158efcd0457e709465217750a584e7a8080e59df8d0ca0f62a967003acc3\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"902213545b640e7ec52d714d3627040a5602ad13748762d5a85d3fe2b938875a\""
Mar 18 07:08:12.210718 containerd[1606]: time="2025-03-18T07:08:12.208885163Z" level=info msg="StartContainer for \"902213545b640e7ec52d714d3627040a5602ad13748762d5a85d3fe2b938875a\""
Mar 18 07:08:12.272907 containerd[1606]: time="2025-03-18T07:08:12.272874954Z" level=info msg="StartContainer for \"902213545b640e7ec52d714d3627040a5602ad13748762d5a85d3fe2b938875a\" returns successfully"
Mar 18 07:08:12.324342 containerd[1606]: time="2025-03-18T07:08:12.324277063Z" level=info msg="shim disconnected" id=902213545b640e7ec52d714d3627040a5602ad13748762d5a85d3fe2b938875a namespace=k8s.io
Mar 18 07:08:12.324645 containerd[1606]: time="2025-03-18T07:08:12.324593600Z" level=warning msg="cleaning up after shim disconnected" id=902213545b640e7ec52d714d3627040a5602ad13748762d5a85d3fe2b938875a namespace=k8s.io
Mar 18 07:08:12.324769 containerd[1606]: time="2025-03-18T07:08:12.324753672Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 18 07:08:12.955423 containerd[1606]: time="2025-03-18T07:08:12.954914806Z" level=info msg="CreateContainer within sandbox \"5b5c158efcd0457e709465217750a584e7a8080e59df8d0ca0f62a967003acc3\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Mar 18 07:08:12.984788 containerd[1606]: time="2025-03-18T07:08:12.984704856Z" level=info msg="CreateContainer within sandbox \"5b5c158efcd0457e709465217750a584e7a8080e59df8d0ca0f62a967003acc3\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"00f5113a2116f90e55af5d97ff387eb12e7b8b91839d21bd6808992d95a24384\""
Mar 18 07:08:12.987886 containerd[1606]: time="2025-03-18T07:08:12.987598355Z" level=info msg="StartContainer for \"00f5113a2116f90e55af5d97ff387eb12e7b8b91839d21bd6808992d95a24384\""
Mar 18 07:08:13.076180 containerd[1606]: time="2025-03-18T07:08:13.076135259Z" level=info msg="StartContainer for \"00f5113a2116f90e55af5d97ff387eb12e7b8b91839d21bd6808992d95a24384\" returns successfully"
Mar 18 07:08:13.098862 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-00f5113a2116f90e55af5d97ff387eb12e7b8b91839d21bd6808992d95a24384-rootfs.mount: Deactivated successfully.
Mar 18 07:08:13.107869 containerd[1606]: time="2025-03-18T07:08:13.107493423Z" level=info msg="shim disconnected" id=00f5113a2116f90e55af5d97ff387eb12e7b8b91839d21bd6808992d95a24384 namespace=k8s.io
Mar 18 07:08:13.107869 containerd[1606]: time="2025-03-18T07:08:13.107540912Z" level=warning msg="cleaning up after shim disconnected" id=00f5113a2116f90e55af5d97ff387eb12e7b8b91839d21bd6808992d95a24384 namespace=k8s.io
Mar 18 07:08:13.107869 containerd[1606]: time="2025-03-18T07:08:13.107549598Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 18 07:08:13.120039 containerd[1606]: time="2025-03-18T07:08:13.119941638Z" level=warning msg="cleanup warnings time=\"2025-03-18T07:08:13Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Mar 18 07:08:13.221946 sshd[4690]: Accepted publickey for core from 172.24.4.1 port 51676 ssh2: RSA SHA256:GaTLl7mOYsQLB+EQCHdFQClTpmSOwApV2HNS8pfyec0
Mar 18 07:08:13.225758 sshd-session[4690]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 18 07:08:13.236697 systemd-logind[1585]: New session 28 of user core.
Mar 18 07:08:13.243949 systemd[1]: Started session-28.scope - Session 28 of User core.
Mar 18 07:08:13.356040 kubelet[2941]: I0318 07:08:13.355563 2941 setters.go:580] "Node became not ready" node="ci-4152-2-2-a-a1f36745dc.novalocal" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-03-18T07:08:13Z","lastTransitionTime":"2025-03-18T07:08:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Mar 18 07:08:13.775123 sshd[4864]: Connection closed by 172.24.4.1 port 51676
Mar 18 07:08:13.776779 sshd-session[4690]: pam_unix(sshd:session): session closed for user core
Mar 18 07:08:13.788043 systemd[1]: Started sshd@26-172.24.4.138:22-172.24.4.1:57396.service - OpenSSH per-connection server daemon (172.24.4.1:57396).
Mar 18 07:08:13.789104 systemd[1]: sshd@25-172.24.4.138:22-172.24.4.1:51676.service: Deactivated successfully.
Mar 18 07:08:13.801063 systemd[1]: session-28.scope: Deactivated successfully.
Mar 18 07:08:13.805231 systemd-logind[1585]: Session 28 logged out. Waiting for processes to exit.
Mar 18 07:08:13.809145 systemd-logind[1585]: Removed session 28.
Mar 18 07:08:13.965318 containerd[1606]: time="2025-03-18T07:08:13.964797729Z" level=info msg="CreateContainer within sandbox \"5b5c158efcd0457e709465217750a584e7a8080e59df8d0ca0f62a967003acc3\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Mar 18 07:08:14.020958 containerd[1606]: time="2025-03-18T07:08:14.020889473Z" level=info msg="CreateContainer within sandbox \"5b5c158efcd0457e709465217750a584e7a8080e59df8d0ca0f62a967003acc3\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"12e4d87f54abc9fb93dba29bfcb2c118a4345f1339f6e8163edd653b64e5a8c4\""
Mar 18 07:08:14.023538 containerd[1606]: time="2025-03-18T07:08:14.022687505Z" level=info msg="StartContainer for \"12e4d87f54abc9fb93dba29bfcb2c118a4345f1339f6e8163edd653b64e5a8c4\""
Mar 18 07:08:14.101967 containerd[1606]: time="2025-03-18T07:08:14.101869873Z" level=info msg="StartContainer for \"12e4d87f54abc9fb93dba29bfcb2c118a4345f1339f6e8163edd653b64e5a8c4\" returns successfully"
Mar 18 07:08:14.121748 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-12e4d87f54abc9fb93dba29bfcb2c118a4345f1339f6e8163edd653b64e5a8c4-rootfs.mount: Deactivated successfully.
Mar 18 07:08:14.132980 containerd[1606]: time="2025-03-18T07:08:14.132907837Z" level=info msg="shim disconnected" id=12e4d87f54abc9fb93dba29bfcb2c118a4345f1339f6e8163edd653b64e5a8c4 namespace=k8s.io
Mar 18 07:08:14.132980 containerd[1606]: time="2025-03-18T07:08:14.132961698Z" level=warning msg="cleaning up after shim disconnected" id=12e4d87f54abc9fb93dba29bfcb2c118a4345f1339f6e8163edd653b64e5a8c4 namespace=k8s.io
Mar 18 07:08:14.132980 containerd[1606]: time="2025-03-18T07:08:14.132972889Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 18 07:08:14.481882 kubelet[2941]: E0318 07:08:14.481633 2941 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Mar 18 07:08:14.970674 containerd[1606]: time="2025-03-18T07:08:14.970352783Z" level=info msg="CreateContainer within sandbox \"5b5c158efcd0457e709465217750a584e7a8080e59df8d0ca0f62a967003acc3\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Mar 18 07:08:14.997957 containerd[1606]: time="2025-03-18T07:08:14.997863662Z" level=info msg="CreateContainer within sandbox \"5b5c158efcd0457e709465217750a584e7a8080e59df8d0ca0f62a967003acc3\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"666ea2833fd8d551a8e6c43f2bba2e2319f4422998eb20b32350170954b731b6\""
Mar 18 07:08:15.000667 containerd[1606]: time="2025-03-18T07:08:14.998905207Z" level=info msg="StartContainer for \"666ea2833fd8d551a8e6c43f2bba2e2319f4422998eb20b32350170954b731b6\""
Mar 18 07:08:15.089296 containerd[1606]: time="2025-03-18T07:08:15.089254927Z" level=info msg="StartContainer for \"666ea2833fd8d551a8e6c43f2bba2e2319f4422998eb20b32350170954b731b6\" returns successfully"
Mar 18 07:08:15.106092 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-666ea2833fd8d551a8e6c43f2bba2e2319f4422998eb20b32350170954b731b6-rootfs.mount: Deactivated successfully.
Mar 18 07:08:15.112638 containerd[1606]: time="2025-03-18T07:08:15.112525072Z" level=info msg="shim disconnected" id=666ea2833fd8d551a8e6c43f2bba2e2319f4422998eb20b32350170954b731b6 namespace=k8s.io
Mar 18 07:08:15.112638 containerd[1606]: time="2025-03-18T07:08:15.112575819Z" level=warning msg="cleaning up after shim disconnected" id=666ea2833fd8d551a8e6c43f2bba2e2319f4422998eb20b32350170954b731b6 namespace=k8s.io
Mar 18 07:08:15.112638 containerd[1606]: time="2025-03-18T07:08:15.112586509Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 18 07:08:15.141769 sshd[4868]: Accepted publickey for core from 172.24.4.1 port 57396 ssh2: RSA SHA256:GaTLl7mOYsQLB+EQCHdFQClTpmSOwApV2HNS8pfyec0
Mar 18 07:08:15.141888 sshd-session[4868]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 18 07:08:15.146909 systemd-logind[1585]: New session 29 of user core.
Mar 18 07:08:15.151685 systemd[1]: Started session-29.scope - Session 29 of User core.
Mar 18 07:08:15.980400 containerd[1606]: time="2025-03-18T07:08:15.980158887Z" level=info msg="CreateContainer within sandbox \"5b5c158efcd0457e709465217750a584e7a8080e59df8d0ca0f62a967003acc3\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Mar 18 07:08:16.027808 containerd[1606]: time="2025-03-18T07:08:16.025548833Z" level=info msg="CreateContainer within sandbox \"5b5c158efcd0457e709465217750a584e7a8080e59df8d0ca0f62a967003acc3\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"8effac8147bbcbffff305fa4d7d2dee6db955a762f900dd321aa83692a335ccc\""
Mar 18 07:08:16.029935 containerd[1606]: time="2025-03-18T07:08:16.029872387Z" level=info msg="StartContainer for \"8effac8147bbcbffff305fa4d7d2dee6db955a762f900dd321aa83692a335ccc\""
Mar 18 07:08:16.168263 containerd[1606]: time="2025-03-18T07:08:16.168224802Z" level=info msg="StartContainer for \"8effac8147bbcbffff305fa4d7d2dee6db955a762f900dd321aa83692a335ccc\" returns successfully"
Mar 18 07:08:16.494470 kernel: cryptd: max_cpu_qlen set to 1000
Mar 18 07:08:16.543491 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm_base(ctr(aes-generic),ghash-generic))))
Mar 18 07:08:17.021359 kubelet[2941]: I0318 07:08:17.021248 2941 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-fghfq" podStartSLOduration=6.021219129 podStartE2EDuration="6.021219129s" podCreationTimestamp="2025-03-18 07:08:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-18 07:08:17.019366264 +0000 UTC m=+167.775700273" watchObservedRunningTime="2025-03-18 07:08:17.021219129 +0000 UTC m=+167.777553108"
Mar 18 07:08:17.815840 systemd[1]: run-containerd-runc-k8s.io-8effac8147bbcbffff305fa4d7d2dee6db955a762f900dd321aa83692a335ccc-runc.Zyh42n.mount: Deactivated successfully.
Mar 18 07:08:19.649832 systemd-networkd[1210]: lxc_health: Link UP
Mar 18 07:08:19.656630 systemd-networkd[1210]: lxc_health: Gained carrier
Mar 18 07:08:21.034589 systemd-networkd[1210]: lxc_health: Gained IPv6LL
Mar 18 07:08:22.229637 systemd[1]: run-containerd-runc-k8s.io-8effac8147bbcbffff305fa4d7d2dee6db955a762f900dd321aa83692a335ccc-runc.BzwpHw.mount: Deactivated successfully.
Mar 18 07:08:24.416895 systemd[1]: run-containerd-runc-k8s.io-8effac8147bbcbffff305fa4d7d2dee6db955a762f900dd321aa83692a335ccc-runc.Iq85qt.mount: Deactivated successfully.
Mar 18 07:08:24.472491 kubelet[2941]: E0318 07:08:24.472176 2941 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:46420->127.0.0.1:44267: write tcp 127.0.0.1:46420->127.0.0.1:44267: write: broken pipe
Mar 18 07:08:27.001069 sshd[4990]: Connection closed by 172.24.4.1 port 57396
Mar 18 07:08:27.000235 sshd-session[4868]: pam_unix(sshd:session): session closed for user core
Mar 18 07:08:27.006419 systemd[1]: sshd@26-172.24.4.138:22-172.24.4.1:57396.service: Deactivated successfully.
Mar 18 07:08:27.014794 systemd-logind[1585]: Session 29 logged out. Waiting for processes to exit.
Mar 18 07:08:27.015697 systemd[1]: session-29.scope: Deactivated successfully.
Mar 18 07:08:27.018656 systemd-logind[1585]: Removed session 29.
Mar 18 07:08:29.376509 containerd[1606]: time="2025-03-18T07:08:29.376117057Z" level=info msg="StopPodSandbox for \"c44f1259973831ec1ebcbfbb23928cbb6add5613f5dc114e09eb9b9ff439623c\""
Mar 18 07:08:29.376509 containerd[1606]: time="2025-03-18T07:08:29.376313257Z" level=info msg="TearDown network for sandbox \"c44f1259973831ec1ebcbfbb23928cbb6add5613f5dc114e09eb9b9ff439623c\" successfully"
Mar 18 07:08:29.376509 containerd[1606]: time="2025-03-18T07:08:29.376341991Z" level=info msg="StopPodSandbox for \"c44f1259973831ec1ebcbfbb23928cbb6add5613f5dc114e09eb9b9ff439623c\" returns successfully"
Mar 18 07:08:29.377930 containerd[1606]: time="2025-03-18T07:08:29.377818333Z" level=info msg="RemovePodSandbox for \"c44f1259973831ec1ebcbfbb23928cbb6add5613f5dc114e09eb9b9ff439623c\""
Mar 18 07:08:29.377930 containerd[1606]: time="2025-03-18T07:08:29.377893054Z" level=info msg="Forcibly stopping sandbox \"c44f1259973831ec1ebcbfbb23928cbb6add5613f5dc114e09eb9b9ff439623c\""
Mar 18 07:08:29.378379 containerd[1606]: time="2025-03-18T07:08:29.378009373Z" level=info msg="TearDown network for sandbox \"c44f1259973831ec1ebcbfbb23928cbb6add5613f5dc114e09eb9b9ff439623c\" successfully"
Mar 18 07:08:29.384533 containerd[1606]: time="2025-03-18T07:08:29.384406586Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c44f1259973831ec1ebcbfbb23928cbb6add5613f5dc114e09eb9b9ff439623c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 18 07:08:29.384701 containerd[1606]: time="2025-03-18T07:08:29.384563221Z" level=info msg="RemovePodSandbox \"c44f1259973831ec1ebcbfbb23928cbb6add5613f5dc114e09eb9b9ff439623c\" returns successfully"
Mar 18 07:08:29.385965 containerd[1606]: time="2025-03-18T07:08:29.385598571Z" level=info msg="StopPodSandbox for \"ba9dbfd47e641214b38f0e0de4115a3dfa7f3478985dc1483f2a86adfc490c3c\""
Mar 18 07:08:29.385965 containerd[1606]: time="2025-03-18T07:08:29.385801714Z" level=info msg="TearDown network for sandbox \"ba9dbfd47e641214b38f0e0de4115a3dfa7f3478985dc1483f2a86adfc490c3c\" successfully"
Mar 18 07:08:29.385965 containerd[1606]: time="2025-03-18T07:08:29.385833464Z" level=info msg="StopPodSandbox for \"ba9dbfd47e641214b38f0e0de4115a3dfa7f3478985dc1483f2a86adfc490c3c\" returns successfully"
Mar 18 07:08:29.386805 containerd[1606]: time="2025-03-18T07:08:29.386751884Z" level=info msg="RemovePodSandbox for \"ba9dbfd47e641214b38f0e0de4115a3dfa7f3478985dc1483f2a86adfc490c3c\""
Mar 18 07:08:29.386944 containerd[1606]: time="2025-03-18T07:08:29.386813972Z" level=info msg="Forcibly stopping sandbox \"ba9dbfd47e641214b38f0e0de4115a3dfa7f3478985dc1483f2a86adfc490c3c\""
Mar 18 07:08:29.387082 containerd[1606]: time="2025-03-18T07:08:29.386929780Z" level=info msg="TearDown network for sandbox \"ba9dbfd47e641214b38f0e0de4115a3dfa7f3478985dc1483f2a86adfc490c3c\" successfully"
Mar 18 07:08:29.392563 containerd[1606]: time="2025-03-18T07:08:29.392432407Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ba9dbfd47e641214b38f0e0de4115a3dfa7f3478985dc1483f2a86adfc490c3c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 18 07:08:29.393695 containerd[1606]: time="2025-03-18T07:08:29.392582169Z" level=info msg="RemovePodSandbox \"ba9dbfd47e641214b38f0e0de4115a3dfa7f3478985dc1483f2a86adfc490c3c\" returns successfully"