May 16 01:35:55.029333 kernel: Linux version 6.6.90-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu May 15 22:19:35 -00 2025
May 16 01:35:55.029360 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=3eac1ac065bd62ee8513964addbc130593421d288f32dda9b1fb7c667f95e96b
May 16 01:35:55.029370 kernel: BIOS-provided physical RAM map:
May 16 01:35:55.029378 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
May 16 01:35:55.029385 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
May 16 01:35:55.029395 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
May 16 01:35:55.029403 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdcfff] usable
May 16 01:35:55.029410 kernel: BIOS-e820: [mem 0x00000000bffdd000-0x00000000bfffffff] reserved
May 16 01:35:55.029418 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
May 16 01:35:55.029425 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
May 16 01:35:55.029433 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000013fffffff] usable
May 16 01:35:55.029440 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
May 16 01:35:55.029448 kernel: NX (Execute Disable) protection: active
May 16 01:35:55.029457 kernel: APIC: Static calls initialized
May 16 01:35:55.029466 kernel: SMBIOS 3.0.0 present.
May 16 01:35:55.029474 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.16.3-debian-1.16.3-2 04/01/2014
May 16 01:35:55.029482 kernel: Hypervisor detected: KVM
May 16 01:35:55.029489 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
May 16 01:35:55.029497 kernel: kvm-clock: using sched offset of 3621392005 cycles
May 16 01:35:55.029507 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
May 16 01:35:55.029515 kernel: tsc: Detected 1996.249 MHz processor
May 16 01:35:55.029523 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
May 16 01:35:55.029532 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
May 16 01:35:55.029540 kernel: last_pfn = 0x140000 max_arch_pfn = 0x400000000
May 16 01:35:55.029548 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
May 16 01:35:55.029556 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
May 16 01:35:55.029564 kernel: last_pfn = 0xbffdd max_arch_pfn = 0x400000000
May 16 01:35:55.029572 kernel: ACPI: Early table checksum verification disabled
May 16 01:35:55.029582 kernel: ACPI: RSDP 0x00000000000F51E0 000014 (v00 BOCHS )
May 16 01:35:55.029590 kernel: ACPI: RSDT 0x00000000BFFE1B65 000030 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 16 01:35:55.029598 kernel: ACPI: FACP 0x00000000BFFE1A49 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 16 01:35:55.029606 kernel: ACPI: DSDT 0x00000000BFFE0040 001A09 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 16 01:35:55.029614 kernel: ACPI: FACS 0x00000000BFFE0000 000040
May 16 01:35:55.029622 kernel: ACPI: APIC 0x00000000BFFE1ABD 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 16 01:35:55.029629 kernel: ACPI: WAET 0x00000000BFFE1B3D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 16 01:35:55.029637 kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1a49-0xbffe1abc]
May 16 01:35:55.029647 kernel: ACPI: Reserving DSDT table memory at [mem 0xbffe0040-0xbffe1a48]
May 16 01:35:55.029655 kernel: ACPI: Reserving FACS table memory at [mem 0xbffe0000-0xbffe003f]
May 16 01:35:55.029663 kernel: ACPI: Reserving APIC table memory at [mem 0xbffe1abd-0xbffe1b3c]
May 16 01:35:55.029671 kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1b3d-0xbffe1b64]
May 16 01:35:55.029682 kernel: No NUMA configuration found
May 16 01:35:55.029690 kernel: Faking a node at [mem 0x0000000000000000-0x000000013fffffff]
May 16 01:35:55.029699 kernel: NODE_DATA(0) allocated [mem 0x13fff7000-0x13fffcfff]
May 16 01:35:55.029709 kernel: Zone ranges:
May 16 01:35:55.029718 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
May 16 01:35:55.029726 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
May 16 01:35:55.029734 kernel: Normal [mem 0x0000000100000000-0x000000013fffffff]
May 16 01:35:55.029742 kernel: Movable zone start for each node
May 16 01:35:55.029750 kernel: Early memory node ranges
May 16 01:35:55.029759 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
May 16 01:35:55.029767 kernel: node 0: [mem 0x0000000000100000-0x00000000bffdcfff]
May 16 01:35:55.029777 kernel: node 0: [mem 0x0000000100000000-0x000000013fffffff]
May 16 01:35:55.029785 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000013fffffff]
May 16 01:35:55.029793 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 16 01:35:55.029801 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
May 16 01:35:55.029810 kernel: On node 0, zone Normal: 35 pages in unavailable ranges
May 16 01:35:55.029818 kernel: ACPI: PM-Timer IO Port: 0x608
May 16 01:35:55.029826 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
May 16 01:35:55.029834 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
May 16 01:35:55.029843 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
May 16 01:35:55.029853 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
May 16 01:35:55.029861 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
May 16 01:35:55.029869 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
May 16 01:35:55.029878 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
May 16 01:35:55.029886 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
May 16 01:35:55.029894 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
May 16 01:35:55.029902 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
May 16 01:35:55.029910 kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
May 16 01:35:55.029919 kernel: Booting paravirtualized kernel on KVM
May 16 01:35:55.029929 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
May 16 01:35:55.029937 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
May 16 01:35:55.029946 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
May 16 01:35:55.029954 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
May 16 01:35:55.029962 kernel: pcpu-alloc: [0] 0 1
May 16 01:35:55.029970 kernel: kvm-guest: PV spinlocks disabled, no host support
May 16 01:35:55.029980 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=3eac1ac065bd62ee8513964addbc130593421d288f32dda9b1fb7c667f95e96b
May 16 01:35:55.029989 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 16 01:35:55.029999 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 16 01:35:55.030008 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 16 01:35:55.030016 kernel: Fallback order for Node 0: 0
May 16 01:35:55.030025 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1031901
May 16 01:35:55.030033 kernel: Policy zone: Normal
May 16 01:35:55.030041 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 16 01:35:55.030049 kernel: software IO TLB: area num 2.
May 16 01:35:55.030058 kernel: Memory: 3966204K/4193772K available (12288K kernel code, 2295K rwdata, 22752K rodata, 42988K init, 2204K bss, 227308K reserved, 0K cma-reserved)
May 16 01:35:55.030067 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
May 16 01:35:55.030077 kernel: ftrace: allocating 37950 entries in 149 pages
May 16 01:35:55.030085 kernel: ftrace: allocated 149 pages with 4 groups
May 16 01:35:55.030093 kernel: Dynamic Preempt: voluntary
May 16 01:35:55.030101 kernel: rcu: Preemptible hierarchical RCU implementation.
May 16 01:35:55.030111 kernel: rcu: RCU event tracing is enabled.
May 16 01:35:55.030119 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
May 16 01:35:55.030128 kernel: Trampoline variant of Tasks RCU enabled.
May 16 01:35:55.030137 kernel: Rude variant of Tasks RCU enabled.
May 16 01:35:55.030145 kernel: Tracing variant of Tasks RCU enabled.
May 16 01:35:55.030155 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 16 01:35:55.030164 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
May 16 01:35:55.030172 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
May 16 01:35:55.030180 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 16 01:35:55.030188 kernel: Console: colour VGA+ 80x25
May 16 01:35:55.030196 kernel: printk: console [tty0] enabled
May 16 01:35:55.030205 kernel: printk: console [ttyS0] enabled
May 16 01:35:55.030213 kernel: ACPI: Core revision 20230628
May 16 01:35:55.030222 kernel: APIC: Switch to symmetric I/O mode setup
May 16 01:35:55.030232 kernel: x2apic enabled
May 16 01:35:55.030240 kernel: APIC: Switched APIC routing to: physical x2apic
May 16 01:35:55.030248 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
May 16 01:35:55.030257 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
May 16 01:35:55.030288 kernel: Calibrating delay loop (skipped) preset value.. 3992.49 BogoMIPS (lpj=1996249)
May 16 01:35:55.030297 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
May 16 01:35:55.030305 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
May 16 01:35:55.030314 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
May 16 01:35:55.030322 kernel: Spectre V2 : Mitigation: Retpolines
May 16 01:35:55.030333 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
May 16 01:35:55.030341 kernel: Speculative Store Bypass: Vulnerable
May 16 01:35:55.030349 kernel: x86/fpu: x87 FPU will use FXSAVE
May 16 01:35:55.030357 kernel: Freeing SMP alternatives memory: 32K
May 16 01:35:55.030366 kernel: pid_max: default: 32768 minimum: 301
May 16 01:35:55.030380 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
May 16 01:35:55.030390 kernel: landlock: Up and running.
May 16 01:35:55.030399 kernel: SELinux: Initializing.
May 16 01:35:55.030408 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 16 01:35:55.030416 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 16 01:35:55.030425 kernel: smpboot: CPU0: AMD Intel Core i7 9xx (Nehalem Class Core i7) (family: 0x6, model: 0x1a, stepping: 0x3)
May 16 01:35:55.030434 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 16 01:35:55.030445 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 16 01:35:55.030454 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 16 01:35:55.030463 kernel: Performance Events: AMD PMU driver.
May 16 01:35:55.030472 kernel: ... version: 0
May 16 01:35:55.030481 kernel: ... bit width: 48
May 16 01:35:55.030491 kernel: ... generic registers: 4
May 16 01:35:55.030500 kernel: ... value mask: 0000ffffffffffff
May 16 01:35:55.030508 kernel: ... max period: 00007fffffffffff
May 16 01:35:55.030517 kernel: ... fixed-purpose events: 0
May 16 01:35:55.030526 kernel: ... event mask: 000000000000000f
May 16 01:35:55.030534 kernel: signal: max sigframe size: 1440
May 16 01:35:55.030543 kernel: rcu: Hierarchical SRCU implementation.
May 16 01:35:55.030552 kernel: rcu: Max phase no-delay instances is 400.
May 16 01:35:55.030561 kernel: smp: Bringing up secondary CPUs ...
May 16 01:35:55.030571 kernel: smpboot: x86: Booting SMP configuration:
May 16 01:35:55.030580 kernel: .... node #0, CPUs: #1
May 16 01:35:55.030589 kernel: smp: Brought up 1 node, 2 CPUs
May 16 01:35:55.030597 kernel: smpboot: Max logical packages: 2
May 16 01:35:55.030606 kernel: smpboot: Total of 2 processors activated (7984.99 BogoMIPS)
May 16 01:35:55.030615 kernel: devtmpfs: initialized
May 16 01:35:55.030624 kernel: x86/mm: Memory block size: 128MB
May 16 01:35:55.030633 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 16 01:35:55.030641 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
May 16 01:35:55.030652 kernel: pinctrl core: initialized pinctrl subsystem
May 16 01:35:55.030661 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 16 01:35:55.030669 kernel: audit: initializing netlink subsys (disabled)
May 16 01:35:55.030678 kernel: audit: type=2000 audit(1747359354.706:1): state=initialized audit_enabled=0 res=1
May 16 01:35:55.030687 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 16 01:35:55.030696 kernel: thermal_sys: Registered thermal governor 'user_space'
May 16 01:35:55.030705 kernel: cpuidle: using governor menu
May 16 01:35:55.030713 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 16 01:35:55.030722 kernel: dca service started, version 1.12.1
May 16 01:35:55.030733 kernel: PCI: Using configuration type 1 for base access
May 16 01:35:55.030742 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
May 16 01:35:55.030751 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 16 01:35:55.030759 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
May 16 01:35:55.030768 kernel: ACPI: Added _OSI(Module Device)
May 16 01:35:55.030776 kernel: ACPI: Added _OSI(Processor Device)
May 16 01:35:55.030785 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 16 01:35:55.030794 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 16 01:35:55.030802 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 16 01:35:55.030813 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
May 16 01:35:55.030821 kernel: ACPI: Interpreter enabled
May 16 01:35:55.030830 kernel: ACPI: PM: (supports S0 S3 S5)
May 16 01:35:55.030838 kernel: ACPI: Using IOAPIC for interrupt routing
May 16 01:35:55.030847 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
May 16 01:35:55.030856 kernel: PCI: Using E820 reservations for host bridge windows
May 16 01:35:55.030865 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
May 16 01:35:55.030873 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 16 01:35:55.031027 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
May 16 01:35:55.031128 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
May 16 01:35:55.031221 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
May 16 01:35:55.031235 kernel: acpiphp: Slot [3] registered
May 16 01:35:55.031244 kernel: acpiphp: Slot [4] registered
May 16 01:35:55.031252 kernel: acpiphp: Slot [5] registered
May 16 01:35:55.034091 kernel: acpiphp: Slot [6] registered
May 16 01:35:55.034109 kernel: acpiphp: Slot [7] registered
May 16 01:35:55.034119 kernel: acpiphp: Slot [8] registered
May 16 01:35:55.034132 kernel: acpiphp: Slot [9] registered
May 16 01:35:55.034141 kernel: acpiphp: Slot [10] registered
May 16 01:35:55.034150 kernel: acpiphp: Slot [11] registered
May 16 01:35:55.034159 kernel: acpiphp: Slot [12] registered
May 16 01:35:55.034168 kernel: acpiphp: Slot [13] registered
May 16 01:35:55.034176 kernel: acpiphp: Slot [14] registered
May 16 01:35:55.034185 kernel: acpiphp: Slot [15] registered
May 16 01:35:55.034194 kernel: acpiphp: Slot [16] registered
May 16 01:35:55.034203 kernel: acpiphp: Slot [17] registered
May 16 01:35:55.034213 kernel: acpiphp: Slot [18] registered
May 16 01:35:55.034222 kernel: acpiphp: Slot [19] registered
May 16 01:35:55.034231 kernel: acpiphp: Slot [20] registered
May 16 01:35:55.034239 kernel: acpiphp: Slot [21] registered
May 16 01:35:55.034248 kernel: acpiphp: Slot [22] registered
May 16 01:35:55.034257 kernel: acpiphp: Slot [23] registered
May 16 01:35:55.034285 kernel: acpiphp: Slot [24] registered
May 16 01:35:55.034295 kernel: acpiphp: Slot [25] registered
May 16 01:35:55.034304 kernel: acpiphp: Slot [26] registered
May 16 01:35:55.034313 kernel: acpiphp: Slot [27] registered
May 16 01:35:55.034324 kernel: acpiphp: Slot [28] registered
May 16 01:35:55.034333 kernel: acpiphp: Slot [29] registered
May 16 01:35:55.034341 kernel: acpiphp: Slot [30] registered
May 16 01:35:55.034350 kernel: acpiphp: Slot [31] registered
May 16 01:35:55.034358 kernel: PCI host bridge to bus 0000:00
May 16 01:35:55.034471 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
May 16 01:35:55.034557 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
May 16 01:35:55.034639 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
May 16 01:35:55.034740 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
May 16 01:35:55.034909 kernel: pci_bus 0000:00: root bus resource [mem 0xc000000000-0xc07fffffff window]
May 16 01:35:55.035043 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 16 01:35:55.035228 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
May 16 01:35:55.036457 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
May 16 01:35:55.036589 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
May 16 01:35:55.036695 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc120-0xc12f]
May 16 01:35:55.036796 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
May 16 01:35:55.036892 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
May 16 01:35:55.036987 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
May 16 01:35:55.037083 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
May 16 01:35:55.037186 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
May 16 01:35:55.038306 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
May 16 01:35:55.038423 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
May 16 01:35:55.038525 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
May 16 01:35:55.038650 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
May 16 01:35:55.038744 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xc000000000-0xc000003fff 64bit pref]
May 16 01:35:55.038833 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfeb90000-0xfeb90fff]
May 16 01:35:55.038921 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfeb80000-0xfeb8ffff pref]
May 16 01:35:55.039015 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
May 16 01:35:55.039114 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
May 16 01:35:55.039205 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc080-0xc0bf]
May 16 01:35:55.041392 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfeb91000-0xfeb91fff]
May 16 01:35:55.041487 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xc000004000-0xc000007fff 64bit pref]
May 16 01:35:55.041577 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb00000-0xfeb7ffff pref]
May 16 01:35:55.041675 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
May 16 01:35:55.041771 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
May 16 01:35:55.041860 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfeb92000-0xfeb92fff]
May 16 01:35:55.041948 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xc000008000-0xc00000bfff 64bit pref]
May 16 01:35:55.042045 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00
May 16 01:35:55.042135 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc0c0-0xc0ff]
May 16 01:35:55.042223 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xc00000c000-0xc00000ffff 64bit pref]
May 16 01:35:55.042372 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00
May 16 01:35:55.042469 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc100-0xc11f]
May 16 01:35:55.042558 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfeb93000-0xfeb93fff]
May 16 01:35:55.042649 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xc000010000-0xc000013fff 64bit pref]
May 16 01:35:55.042662 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
May 16 01:35:55.042672 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
May 16 01:35:55.042681 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
May 16 01:35:55.042690 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
May 16 01:35:55.042699 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
May 16 01:35:55.042711 kernel: iommu: Default domain type: Translated
May 16 01:35:55.042720 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
May 16 01:35:55.042729 kernel: PCI: Using ACPI for IRQ routing
May 16 01:35:55.042738 kernel: PCI: pci_cache_line_size set to 64 bytes
May 16 01:35:55.042747 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
May 16 01:35:55.042756 kernel: e820: reserve RAM buffer [mem 0xbffdd000-0xbfffffff]
May 16 01:35:55.042844 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
May 16 01:35:55.042933 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
May 16 01:35:55.043023 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
May 16 01:35:55.043040 kernel: vgaarb: loaded
May 16 01:35:55.043049 kernel: clocksource: Switched to clocksource kvm-clock
May 16 01:35:55.043058 kernel: VFS: Disk quotas dquot_6.6.0
May 16 01:35:55.043067 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 16 01:35:55.043076 kernel: pnp: PnP ACPI init
May 16 01:35:55.043167 kernel: pnp 00:03: [dma 2]
May 16 01:35:55.043182 kernel: pnp: PnP ACPI: found 5 devices
May 16 01:35:55.043191 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
May 16 01:35:55.043204 kernel: NET: Registered PF_INET protocol family
May 16 01:35:55.043213 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 16 01:35:55.043222 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 16 01:35:55.043231 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 16 01:35:55.043240 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 16 01:35:55.043249 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 16 01:35:55.043258 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 16 01:35:55.044123 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 16 01:35:55.044134 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 16 01:35:55.044149 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 16 01:35:55.044158 kernel: NET: Registered PF_XDP protocol family
May 16 01:35:55.044256 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
May 16 01:35:55.044402 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
May 16 01:35:55.044498 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
May 16 01:35:55.044582 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
May 16 01:35:55.044665 kernel: pci_bus 0000:00: resource 8 [mem 0xc000000000-0xc07fffffff window]
May 16 01:35:55.044763 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
May 16 01:35:55.044866 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
May 16 01:35:55.044880 kernel: PCI: CLS 0 bytes, default 64
May 16 01:35:55.044890 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
May 16 01:35:55.044901 kernel: software IO TLB: mapped [mem 0x00000000bbfdd000-0x00000000bffdd000] (64MB)
May 16 01:35:55.044910 kernel: Initialise system trusted keyrings
May 16 01:35:55.044920 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 16 01:35:55.044929 kernel: Key type asymmetric registered
May 16 01:35:55.044939 kernel: Asymmetric key parser 'x509' registered
May 16 01:35:55.044949 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
May 16 01:35:55.044962 kernel: io scheduler mq-deadline registered
May 16 01:35:55.044971 kernel: io scheduler kyber registered
May 16 01:35:55.044981 kernel: io scheduler bfq registered
May 16 01:35:55.044990 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
May 16 01:35:55.045000 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
May 16 01:35:55.045010 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
May 16 01:35:55.045020 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
May 16 01:35:55.045029 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
May 16 01:35:55.045039 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 16 01:35:55.045051 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
May 16 01:35:55.045060 kernel: random: crng init done
May 16 01:35:55.045070 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
May 16 01:35:55.045079 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
May 16 01:35:55.045089 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
May 16 01:35:55.045191 kernel: rtc_cmos 00:04: RTC can wake from S4
May 16 01:35:55.045208 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
May 16 01:35:55.047006 kernel: rtc_cmos 00:04: registered as rtc0
May 16 01:35:55.047110 kernel: rtc_cmos 00:04: setting system clock to 2025-05-16T01:35:54 UTC (1747359354)
May 16 01:35:55.047192 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
May 16 01:35:55.047205 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
May 16 01:35:55.047215 kernel: NET: Registered PF_INET6 protocol family
May 16 01:35:55.047224 kernel: Segment Routing with IPv6
May 16 01:35:55.047232 kernel: In-situ OAM (IOAM) with IPv6
May 16 01:35:55.047241 kernel: NET: Registered PF_PACKET protocol family
May 16 01:35:55.047250 kernel: Key type dns_resolver registered
May 16 01:35:55.047259 kernel: IPI shorthand broadcast: enabled
May 16 01:35:55.047323 kernel: sched_clock: Marking stable (985007217, 171086847)->(1182227477, -26133413)
May 16 01:35:55.047332 kernel: registered taskstats version 1
May 16 01:35:55.047341 kernel: Loading compiled-in X.509 certificates
May 16 01:35:55.047350 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.90-flatcar: 563478d245b598189519397611f5bddee97f3fc1'
May 16 01:35:55.047359 kernel: Key type .fscrypt registered
May 16 01:35:55.047368 kernel: Key type fscrypt-provisioning registered
May 16 01:35:55.047377 kernel: ima: No TPM chip found, activating TPM-bypass!
May 16 01:35:55.047386 kernel: ima: Allocated hash algorithm: sha1
May 16 01:35:55.047396 kernel: ima: No architecture policies found
May 16 01:35:55.047405 kernel: clk: Disabling unused clocks
May 16 01:35:55.047414 kernel: Freeing unused kernel image (initmem) memory: 42988K
May 16 01:35:55.047423 kernel: Write protecting the kernel read-only data: 36864k
May 16 01:35:55.047431 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
May 16 01:35:55.047440 kernel: Run /init as init process
May 16 01:35:55.047449 kernel: with arguments:
May 16 01:35:55.047457 kernel: /init
May 16 01:35:55.047466 kernel: with environment:
May 16 01:35:55.047474 kernel: HOME=/
May 16 01:35:55.047485 kernel: TERM=linux
May 16 01:35:55.047493 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 16 01:35:55.047505 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
May 16 01:35:55.047517 systemd[1]: Detected virtualization kvm.
May 16 01:35:55.047527 systemd[1]: Detected architecture x86-64.
May 16 01:35:55.047537 systemd[1]: Running in initrd.
May 16 01:35:55.047546 systemd[1]: No hostname configured, using default hostname.
May 16 01:35:55.047557 systemd[1]: Hostname set to .
May 16 01:35:55.047567 systemd[1]: Initializing machine ID from VM UUID.
May 16 01:35:55.047577 systemd[1]: Queued start job for default target initrd.target.
May 16 01:35:55.047586 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 16 01:35:55.047596 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 16 01:35:55.047606 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 16 01:35:55.047616 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 16 01:35:55.047635 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 16 01:35:55.047647 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 16 01:35:55.047658 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 16 01:35:55.047668 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 16 01:35:55.047679 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 16 01:35:55.047690 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 16 01:35:55.047700 systemd[1]: Reached target paths.target - Path Units.
May 16 01:35:55.047710 systemd[1]: Reached target slices.target - Slice Units.
May 16 01:35:55.047719 systemd[1]: Reached target swap.target - Swaps.
May 16 01:35:55.047729 systemd[1]: Reached target timers.target - Timer Units.
May 16 01:35:55.047739 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 16 01:35:55.047749 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 16 01:35:55.047759 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 16 01:35:55.047769 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
May 16 01:35:55.047780 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 16 01:35:55.047791 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 16 01:35:55.047801 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 16 01:35:55.047810 systemd[1]: Reached target sockets.target - Socket Units.
May 16 01:35:55.047820 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 16 01:35:55.047830 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 16 01:35:55.047840 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 16 01:35:55.047849 systemd[1]: Starting systemd-fsck-usr.service...
May 16 01:35:55.047861 systemd[1]: Starting systemd-journald.service - Journal Service...
May 16 01:35:55.047871 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 16 01:35:55.047881 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 16 01:35:55.047911 systemd-journald[185]: Collecting audit messages is disabled.
May 16 01:35:55.047939 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 16 01:35:55.047949 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 16 01:35:55.047959 systemd[1]: Finished systemd-fsck-usr.service.
May 16 01:35:55.047970 systemd-journald[185]: Journal started
May 16 01:35:55.047995 systemd-journald[185]: Runtime Journal (/run/log/journal/24e51ecc260b44b6840b602da112b743) is 8.0M, max 78.3M, 70.3M free.
May 16 01:35:55.035183 systemd-modules-load[186]: Inserted module 'overlay'
May 16 01:35:55.098549 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 16 01:35:55.098581 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 16 01:35:55.098595 kernel: Bridge firewalling registered
May 16 01:35:55.098606 systemd[1]: Started systemd-journald.service - Journal Service.
May 16 01:35:55.070588 systemd-modules-load[186]: Inserted module 'br_netfilter'
May 16 01:35:55.106604 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 16 01:35:55.108428 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 16 01:35:55.111790 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 16 01:35:55.119392 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 16 01:35:55.122389 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 16 01:35:55.123893 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 16 01:35:55.131167 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 16 01:35:55.146526 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 16 01:35:55.148122 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 16 01:35:55.154404 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 16 01:35:55.161521 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 16 01:35:55.167412 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 16 01:35:55.180783 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... May 16 01:35:55.193608 dracut-cmdline[222]: dracut-dracut-053 May 16 01:35:55.193646 systemd-resolved[216]: Positive Trust Anchors: May 16 01:35:55.193656 systemd-resolved[216]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 16 01:35:55.193697 systemd-resolved[216]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 16 01:35:55.196936 systemd-resolved[216]: Defaulting to hostname 'linux'. May 16 01:35:55.201603 dracut-cmdline[222]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=3eac1ac065bd62ee8513964addbc130593421d288f32dda9b1fb7c667f95e96b May 16 01:35:55.198725 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 16 01:35:55.199812 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 16 01:35:55.278302 kernel: SCSI subsystem initialized May 16 01:35:55.289361 kernel: Loading iSCSI transport class v2.0-870. May 16 01:35:55.301477 kernel: iscsi: registered transport (tcp) May 16 01:35:55.323508 kernel: iscsi: registered transport (qla4xxx) May 16 01:35:55.323579 kernel: QLogic iSCSI HBA Driver May 16 01:35:55.384818 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. May 16 01:35:55.395649 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... 
May 16 01:35:55.446450 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. May 16 01:35:55.446514 kernel: device-mapper: uevent: version 1.0.3 May 16 01:35:55.448597 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com May 16 01:35:55.508314 kernel: raid6: sse2x4 gen() 5182 MB/s May 16 01:35:55.527318 kernel: raid6: sse2x2 gen() 5975 MB/s May 16 01:35:55.545769 kernel: raid6: sse2x1 gen() 9467 MB/s May 16 01:35:55.545841 kernel: raid6: using algorithm sse2x1 gen() 9467 MB/s May 16 01:35:55.564702 kernel: raid6: .... xor() 7374 MB/s, rmw enabled May 16 01:35:55.564779 kernel: raid6: using ssse3x2 recovery algorithm May 16 01:35:55.586530 kernel: xor: measuring software checksum speed May 16 01:35:55.586598 kernel: prefetch64-sse : 18474 MB/sec May 16 01:35:55.590006 kernel: generic_sse : 15468 MB/sec May 16 01:35:55.590066 kernel: xor: using function: prefetch64-sse (18474 MB/sec) May 16 01:35:55.775340 kernel: Btrfs loaded, zoned=no, fsverity=no May 16 01:35:55.792717 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. May 16 01:35:55.802535 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 16 01:35:55.815983 systemd-udevd[405]: Using default interface naming scheme 'v255'. May 16 01:35:55.820354 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 16 01:35:55.829763 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... May 16 01:35:55.859681 dracut-pre-trigger[414]: rd.md=0: removing MD RAID activation May 16 01:35:55.916815 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. May 16 01:35:55.925535 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 16 01:35:56.007762 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. 
May 16 01:35:56.018655 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... May 16 01:35:56.067112 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. May 16 01:35:56.072658 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. May 16 01:35:56.073905 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 16 01:35:56.076887 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 16 01:35:56.085451 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... May 16 01:35:56.104109 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. May 16 01:35:56.112292 kernel: virtio_blk virtio2: 2/0/0 default/read/poll queues May 16 01:35:56.122329 kernel: virtio_blk virtio2: [vda] 20971520 512-byte logical blocks (10.7 GB/10.0 GiB) May 16 01:35:56.126794 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 16 01:35:56.127924 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 16 01:35:56.129935 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 16 01:35:56.130649 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 16 01:35:56.131382 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 16 01:35:56.148370 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. May 16 01:35:56.148394 kernel: GPT:17805311 != 20971519 May 16 01:35:56.148407 kernel: GPT:Alternate GPT header not at the end of the disk. May 16 01:35:56.148419 kernel: GPT:17805311 != 20971519 May 16 01:35:56.148431 kernel: GPT: Use GNU Parted to correct GPT errors. May 16 01:35:56.148442 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 16 01:35:56.148454 kernel: libata version 3.00 loaded. May 16 01:35:56.132636 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... 
May 16 01:35:56.145064 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 16 01:35:56.163589 kernel: ata_piix 0000:00:01.1: version 2.13 May 16 01:35:56.163799 kernel: scsi host0: ata_piix May 16 01:35:56.163943 kernel: scsi host1: ata_piix May 16 01:35:56.164105 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc120 irq 14 May 16 01:35:56.164128 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc128 irq 15 May 16 01:35:56.194294 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (463) May 16 01:35:56.197285 kernel: BTRFS: device fsid da1480a3-a7d8-4e12-bbe1-1257540eb9ae devid 1 transid 39 /dev/vda3 scanned by (udev-worker) (460) May 16 01:35:56.210815 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. May 16 01:35:56.231101 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 16 01:35:56.237788 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. May 16 01:35:56.242594 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. May 16 01:35:56.243204 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. May 16 01:35:56.249461 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 16 01:35:56.257457 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... May 16 01:35:56.261330 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 16 01:35:56.273485 disk-uuid[503]: Primary Header is updated. May 16 01:35:56.273485 disk-uuid[503]: Secondary Entries is updated. May 16 01:35:56.273485 disk-uuid[503]: Secondary Header is updated. 
May 16 01:35:56.284648 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 16 01:35:56.283986 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 16 01:35:57.300373 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 16 01:35:57.301643 disk-uuid[509]: The operation has completed successfully. May 16 01:35:57.374033 systemd[1]: disk-uuid.service: Deactivated successfully. May 16 01:35:57.374325 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. May 16 01:35:57.405384 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... May 16 01:35:57.413031 sh[525]: Success May 16 01:35:57.445343 kernel: device-mapper: verity: sha256 using implementation "sha256-ssse3" May 16 01:35:57.533033 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. May 16 01:35:57.553464 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... May 16 01:35:57.560677 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. May 16 01:35:57.597228 kernel: BTRFS info (device dm-0): first mount of filesystem da1480a3-a7d8-4e12-bbe1-1257540eb9ae May 16 01:35:57.597331 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm May 16 01:35:57.600799 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead May 16 01:35:57.604589 kernel: BTRFS info (device dm-0): disabling log replay at mount time May 16 01:35:57.607386 kernel: BTRFS info (device dm-0): using free space tree May 16 01:35:57.628571 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. May 16 01:35:57.630427 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. May 16 01:35:57.638488 systemd[1]: Starting ignition-setup.service - Ignition (setup)... 
May 16 01:35:57.647549 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... May 16 01:35:57.672354 kernel: BTRFS info (device vda6): first mount of filesystem 3387f2c6-46d4-43a5-af69-bf48427d85c5 May 16 01:35:57.672488 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 16 01:35:57.674329 kernel: BTRFS info (device vda6): using free space tree May 16 01:35:57.686342 kernel: BTRFS info (device vda6): auto enabling async discard May 16 01:35:57.706766 systemd[1]: mnt-oem.mount: Deactivated successfully. May 16 01:35:57.713075 kernel: BTRFS info (device vda6): last unmount of filesystem 3387f2c6-46d4-43a5-af69-bf48427d85c5 May 16 01:35:57.734759 systemd[1]: Finished ignition-setup.service - Ignition (setup). May 16 01:35:57.745553 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... May 16 01:35:57.794774 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 16 01:35:57.802437 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 16 01:35:57.825049 systemd-networkd[708]: lo: Link UP May 16 01:35:57.825061 systemd-networkd[708]: lo: Gained carrier May 16 01:35:57.826238 systemd-networkd[708]: Enumeration completed May 16 01:35:57.827233 systemd-networkd[708]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 16 01:35:57.827237 systemd-networkd[708]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 16 01:35:57.828669 systemd-networkd[708]: eth0: Link UP May 16 01:35:57.828672 systemd-networkd[708]: eth0: Gained carrier May 16 01:35:57.828680 systemd-networkd[708]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 16 01:35:57.830531 systemd[1]: Started systemd-networkd.service - Network Configuration. 
May 16 01:35:57.836768 systemd[1]: Reached target network.target - Network. May 16 01:35:57.842312 systemd-networkd[708]: eth0: DHCPv4 address 172.24.4.31/24, gateway 172.24.4.1 acquired from 172.24.4.1 May 16 01:35:57.903526 ignition[640]: Ignition 2.20.0 May 16 01:35:57.904375 ignition[640]: Stage: fetch-offline May 16 01:35:57.906549 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). May 16 01:35:57.904466 ignition[640]: no configs at "/usr/lib/ignition/base.d" May 16 01:35:57.908934 systemd-resolved[216]: Detected conflict on linux IN A 172.24.4.31 May 16 01:35:57.904479 ignition[640]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" May 16 01:35:57.908944 systemd-resolved[216]: Hostname conflict, changing published hostname from 'linux' to 'linux6'. May 16 01:35:57.904640 ignition[640]: parsed url from cmdline: "" May 16 01:35:57.913576 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... May 16 01:35:57.904644 ignition[640]: no config URL provided May 16 01:35:57.904651 ignition[640]: reading system config file "/usr/lib/ignition/user.ign" May 16 01:35:57.904664 ignition[640]: no config at "/usr/lib/ignition/user.ign" May 16 01:35:57.904671 ignition[640]: failed to fetch config: resource requires networking May 16 01:35:57.904910 ignition[640]: Ignition finished successfully May 16 01:35:57.927684 ignition[717]: Ignition 2.20.0 May 16 01:35:57.927705 ignition[717]: Stage: fetch May 16 01:35:57.927925 ignition[717]: no configs at "/usr/lib/ignition/base.d" May 16 01:35:57.927957 ignition[717]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" May 16 01:35:57.928090 ignition[717]: parsed url from cmdline: "" May 16 01:35:57.928094 ignition[717]: no config URL provided May 16 01:35:57.928100 ignition[717]: reading system config file "/usr/lib/ignition/user.ign" May 16 01:35:57.928132 ignition[717]: no config at "/usr/lib/ignition/user.ign" May 16 01:35:57.928244 ignition[717]: config drive 
("/dev/disk/by-label/config-2") not found. Waiting... May 16 01:35:57.928289 ignition[717]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... May 16 01:35:57.928347 ignition[717]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 May 16 01:35:58.173973 ignition[717]: GET result: OK May 16 01:35:58.174127 ignition[717]: parsing config with SHA512: 69580f57d0fe8ba70256317c9798761bb5fcbcc7403724369b6746cab59ad4e4933089a69ea1ffbef474614d6fac10f9b0c0b095da3cae00c60f643ea614c856 May 16 01:35:58.187676 unknown[717]: fetched base config from "system" May 16 01:35:58.187711 unknown[717]: fetched base config from "system" May 16 01:35:58.189173 ignition[717]: fetch: fetch complete May 16 01:35:58.187726 unknown[717]: fetched user config from "openstack" May 16 01:35:58.189217 ignition[717]: fetch: fetch passed May 16 01:35:58.192796 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). May 16 01:35:58.189432 ignition[717]: Ignition finished successfully May 16 01:35:58.202698 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... May 16 01:35:58.241561 ignition[724]: Ignition 2.20.0 May 16 01:35:58.241589 ignition[724]: Stage: kargs May 16 01:35:58.242148 ignition[724]: no configs at "/usr/lib/ignition/base.d" May 16 01:35:58.242178 ignition[724]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" May 16 01:35:58.247968 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). May 16 01:35:58.244746 ignition[724]: kargs: kargs passed May 16 01:35:58.244853 ignition[724]: Ignition finished successfully May 16 01:35:58.262723 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
May 16 01:35:58.290840 ignition[730]: Ignition 2.20.0 May 16 01:35:58.290869 ignition[730]: Stage: disks May 16 01:35:58.291347 ignition[730]: no configs at "/usr/lib/ignition/base.d" May 16 01:35:58.291378 ignition[730]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" May 16 01:35:58.296373 systemd[1]: Finished ignition-disks.service - Ignition (disks). May 16 01:35:58.293664 ignition[730]: disks: disks passed May 16 01:35:58.298743 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. May 16 01:35:58.293742 ignition[730]: Ignition finished successfully May 16 01:35:58.300571 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. May 16 01:35:58.302287 systemd[1]: Reached target local-fs.target - Local File Systems. May 16 01:35:58.304479 systemd[1]: Reached target sysinit.target - System Initialization. May 16 01:35:58.306144 systemd[1]: Reached target basic.target - Basic System. May 16 01:35:58.314451 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... May 16 01:35:58.342102 systemd-fsck[739]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks May 16 01:35:58.354252 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. May 16 01:35:58.362803 systemd[1]: Mounting sysroot.mount - /sysroot... May 16 01:35:58.509294 kernel: EXT4-fs (vda9): mounted filesystem 13a141f5-2ff0-46d9-bee3-974c86536128 r/w with ordered data mode. Quota mode: none. May 16 01:35:58.510743 systemd[1]: Mounted sysroot.mount - /sysroot. May 16 01:35:58.513025 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. May 16 01:35:58.523451 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 16 01:35:58.526763 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... 
May 16 01:35:58.528831 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. May 16 01:35:58.532572 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent... May 16 01:35:58.535990 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 16 01:35:58.556926 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (747) May 16 01:35:58.556954 kernel: BTRFS info (device vda6): first mount of filesystem 3387f2c6-46d4-43a5-af69-bf48427d85c5 May 16 01:35:58.556966 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 16 01:35:58.556977 kernel: BTRFS info (device vda6): using free space tree May 16 01:35:58.556989 kernel: BTRFS info (device vda6): auto enabling async discard May 16 01:35:58.536086 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. May 16 01:35:58.543429 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. May 16 01:35:58.563593 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... May 16 01:35:58.566721 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 16 01:35:58.699126 initrd-setup-root[775]: cut: /sysroot/etc/passwd: No such file or directory May 16 01:35:58.715774 initrd-setup-root[782]: cut: /sysroot/etc/group: No such file or directory May 16 01:35:58.722933 initrd-setup-root[789]: cut: /sysroot/etc/shadow: No such file or directory May 16 01:35:58.732879 initrd-setup-root[796]: cut: /sysroot/etc/gshadow: No such file or directory May 16 01:35:58.895184 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. May 16 01:35:58.905460 systemd[1]: Starting ignition-mount.service - Ignition (mount)... May 16 01:35:58.915674 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... 
May 16 01:35:58.933076 systemd[1]: sysroot-oem.mount: Deactivated successfully. May 16 01:35:58.942340 kernel: BTRFS info (device vda6): last unmount of filesystem 3387f2c6-46d4-43a5-af69-bf48427d85c5 May 16 01:35:58.967087 ignition[864]: INFO : Ignition 2.20.0 May 16 01:35:58.967087 ignition[864]: INFO : Stage: mount May 16 01:35:58.971755 ignition[864]: INFO : no configs at "/usr/lib/ignition/base.d" May 16 01:35:58.971755 ignition[864]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" May 16 01:35:58.971755 ignition[864]: INFO : mount: mount passed May 16 01:35:58.971755 ignition[864]: INFO : Ignition finished successfully May 16 01:35:58.969783 systemd[1]: Finished ignition-mount.service - Ignition (mount). May 16 01:35:58.977244 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. May 16 01:35:59.328687 systemd-networkd[708]: eth0: Gained IPv6LL May 16 01:36:05.799755 coreos-metadata[749]: May 16 01:36:05.799 WARN failed to locate config-drive, using the metadata service API instead May 16 01:36:05.840720 coreos-metadata[749]: May 16 01:36:05.840 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 May 16 01:36:05.853698 coreos-metadata[749]: May 16 01:36:05.853 INFO Fetch successful May 16 01:36:05.855200 coreos-metadata[749]: May 16 01:36:05.854 INFO wrote hostname ci-4152-2-3-n-26e690edb8.novalocal to /sysroot/etc/hostname May 16 01:36:05.857790 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. May 16 01:36:05.858021 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent. May 16 01:36:05.872562 systemd[1]: Starting ignition-files.service - Ignition (files)... May 16 01:36:05.895715 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
May 16 01:36:05.916404 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (880) May 16 01:36:05.923631 kernel: BTRFS info (device vda6): first mount of filesystem 3387f2c6-46d4-43a5-af69-bf48427d85c5 May 16 01:36:05.923702 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 16 01:36:05.927837 kernel: BTRFS info (device vda6): using free space tree May 16 01:36:05.939375 kernel: BTRFS info (device vda6): auto enabling async discard May 16 01:36:05.944221 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 16 01:36:05.982241 ignition[898]: INFO : Ignition 2.20.0 May 16 01:36:05.982241 ignition[898]: INFO : Stage: files May 16 01:36:05.985208 ignition[898]: INFO : no configs at "/usr/lib/ignition/base.d" May 16 01:36:05.985208 ignition[898]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" May 16 01:36:05.985208 ignition[898]: DEBUG : files: compiled without relabeling support, skipping May 16 01:36:05.985208 ignition[898]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 16 01:36:05.985208 ignition[898]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 16 01:36:05.991179 ignition[898]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 16 01:36:05.991179 ignition[898]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 16 01:36:05.991179 ignition[898]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 16 01:36:05.989547 unknown[898]: wrote ssh authorized keys file for user: core May 16 01:36:05.994307 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" May 16 01:36:05.994307 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 May 16 
01:36:06.059233 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK May 16 01:36:06.351415 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" May 16 01:36:06.351415 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" May 16 01:36:06.351415 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 May 16 01:36:07.036757 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK May 16 01:36:07.444148 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" May 16 01:36:07.444148 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" May 16 01:36:07.448643 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" May 16 01:36:07.448643 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" May 16 01:36:07.448643 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" May 16 01:36:07.448643 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 16 01:36:07.448643 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 16 01:36:07.448643 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 16 01:36:07.448643 ignition[898]: INFO : 
files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 16 01:36:07.448643 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" May 16 01:36:07.448643 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 16 01:36:07.448643 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" May 16 01:36:07.448643 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" May 16 01:36:07.448643 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" May 16 01:36:07.448643 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1 May 16 01:36:07.968638 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK May 16 01:36:09.558699 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" May 16 01:36:09.558699 ignition[898]: INFO : files: op(c): [started] processing unit "prepare-helm.service" May 16 01:36:09.566390 ignition[898]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 16 01:36:09.566390 ignition[898]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 16 01:36:09.566390 
ignition[898]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" May 16 01:36:09.566390 ignition[898]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" May 16 01:36:09.566390 ignition[898]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" May 16 01:36:09.566390 ignition[898]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" May 16 01:36:09.566390 ignition[898]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" May 16 01:36:09.566390 ignition[898]: INFO : files: files passed May 16 01:36:09.566390 ignition[898]: INFO : Ignition finished successfully May 16 01:36:09.565192 systemd[1]: Finished ignition-files.service - Ignition (files). May 16 01:36:09.574774 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... May 16 01:36:09.579392 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... May 16 01:36:09.582085 systemd[1]: ignition-quench.service: Deactivated successfully. May 16 01:36:09.582168 systemd[1]: Finished ignition-quench.service - Ignition (record completion). May 16 01:36:09.600367 initrd-setup-root-after-ignition[927]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 16 01:36:09.600367 initrd-setup-root-after-ignition[927]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory May 16 01:36:09.602055 initrd-setup-root-after-ignition[931]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 16 01:36:09.605189 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. May 16 01:36:09.606040 systemd[1]: Reached target ignition-complete.target - Ignition Complete. 
May 16 01:36:09.626590 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... May 16 01:36:09.660788 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 16 01:36:09.661656 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. May 16 01:36:09.664704 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. May 16 01:36:09.666175 systemd[1]: Reached target initrd.target - Initrd Default Target. May 16 01:36:09.668153 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. May 16 01:36:09.675538 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... May 16 01:36:09.693638 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 16 01:36:09.703430 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... May 16 01:36:09.739786 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. May 16 01:36:09.741568 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. May 16 01:36:09.744768 systemd[1]: Stopped target timers.target - Timer Units. May 16 01:36:09.747705 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 16 01:36:09.747982 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 16 01:36:09.751222 systemd[1]: Stopped target initrd.target - Initrd Default Target. May 16 01:36:09.753200 systemd[1]: Stopped target basic.target - Basic System. May 16 01:36:09.756159 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. May 16 01:36:09.758827 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. May 16 01:36:09.761519 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. May 16 01:36:09.764526 systemd[1]: Stopped target remote-fs.target - Remote File Systems. 
May 16 01:36:09.767565 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
May 16 01:36:09.770085 systemd[1]: Stopped target sysinit.target - System Initialization.
May 16 01:36:09.772576 systemd[1]: Stopped target local-fs.target - Local File Systems.
May 16 01:36:09.775076 systemd[1]: Stopped target swap.target - Swaps.
May 16 01:36:09.777360 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 16 01:36:09.777687 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
May 16 01:36:09.780339 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
May 16 01:36:09.782142 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 16 01:36:09.784343 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
May 16 01:36:09.784667 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 16 01:36:09.786914 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 16 01:36:09.787194 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
May 16 01:36:09.790345 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 16 01:36:09.790647 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
May 16 01:36:09.793378 systemd[1]: ignition-files.service: Deactivated successfully.
May 16 01:36:09.793643 systemd[1]: Stopped ignition-files.service - Ignition (files).
May 16 01:36:09.803762 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
May 16 01:36:09.817348 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
May 16 01:36:09.817910 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 16 01:36:09.819577 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
May 16 01:36:09.824428 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 16 01:36:09.824787 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
May 16 01:36:09.830985 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 16 01:36:09.831081 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
May 16 01:36:09.840370 ignition[951]: INFO : Ignition 2.20.0
May 16 01:36:09.840370 ignition[951]: INFO : Stage: umount
May 16 01:36:09.847355 ignition[951]: INFO : no configs at "/usr/lib/ignition/base.d"
May 16 01:36:09.847355 ignition[951]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
May 16 01:36:09.847355 ignition[951]: INFO : umount: umount passed
May 16 01:36:09.847355 ignition[951]: INFO : Ignition finished successfully
May 16 01:36:09.844543 systemd[1]: ignition-mount.service: Deactivated successfully.
May 16 01:36:09.844820 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
May 16 01:36:09.845932 systemd[1]: ignition-disks.service: Deactivated successfully.
May 16 01:36:09.845973 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
May 16 01:36:09.846502 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 16 01:36:09.846544 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
May 16 01:36:09.847076 systemd[1]: ignition-fetch.service: Deactivated successfully.
May 16 01:36:09.847114 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
May 16 01:36:09.849385 systemd[1]: Stopped target network.target - Network.
May 16 01:36:09.850187 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 16 01:36:09.850231 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
May 16 01:36:09.850776 systemd[1]: Stopped target paths.target - Path Units.
May 16 01:36:09.851210 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 16 01:36:09.856303 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 16 01:36:09.857084 systemd[1]: Stopped target slices.target - Slice Units.
May 16 01:36:09.857652 systemd[1]: Stopped target sockets.target - Socket Units.
May 16 01:36:09.858848 systemd[1]: iscsid.socket: Deactivated successfully.
May 16 01:36:09.858882 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
May 16 01:36:09.859414 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 16 01:36:09.859447 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 16 01:36:09.861921 systemd[1]: ignition-setup.service: Deactivated successfully.
May 16 01:36:09.861966 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
May 16 01:36:09.863097 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
May 16 01:36:09.863138 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
May 16 01:36:09.864548 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
May 16 01:36:09.865852 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
May 16 01:36:09.867787 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 16 01:36:09.868361 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 16 01:36:09.868446 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
May 16 01:36:09.869426 systemd-networkd[708]: eth0: DHCPv6 lease lost
May 16 01:36:09.870878 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 16 01:36:09.870951 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
May 16 01:36:09.873609 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 16 01:36:09.873743 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
May 16 01:36:09.875172 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 16 01:36:09.875294 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
May 16 01:36:09.878031 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 16 01:36:09.878422 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
May 16 01:36:09.885391 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
May 16 01:36:09.886201 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 16 01:36:09.886255 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 16 01:36:09.886837 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 16 01:36:09.886877 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 16 01:36:09.887441 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 16 01:36:09.887481 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
May 16 01:36:09.888633 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
May 16 01:36:09.888674 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
May 16 01:36:09.889835 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 16 01:36:09.898886 systemd[1]: network-cleanup.service: Deactivated successfully.
May 16 01:36:09.899006 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
May 16 01:36:09.900731 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 16 01:36:09.900885 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 16 01:36:09.902621 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 16 01:36:09.902676 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
May 16 01:36:09.903983 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 16 01:36:09.904028 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
May 16 01:36:09.905558 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 16 01:36:09.905599 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
May 16 01:36:09.907576 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 16 01:36:09.907616 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
May 16 01:36:09.908717 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 16 01:36:09.908760 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 16 01:36:09.919411 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
May 16 01:36:09.920089 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 16 01:36:09.920142 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 16 01:36:09.921445 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
May 16 01:36:09.921486 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 16 01:36:09.923912 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 16 01:36:09.923955 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
May 16 01:36:09.925353 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 16 01:36:09.925393 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 16 01:36:09.926962 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 16 01:36:09.927052 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
May 16 01:36:09.927974 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
May 16 01:36:09.933527 systemd[1]: Starting initrd-switch-root.service - Switch Root...
May 16 01:36:09.939509 systemd[1]: Switching root.
May 16 01:36:09.974986 systemd-journald[185]: Journal stopped
May 16 01:36:11.744341 systemd-journald[185]: Received SIGTERM from PID 1 (systemd).
May 16 01:36:11.744390 kernel: SELinux: policy capability network_peer_controls=1
May 16 01:36:11.744409 kernel: SELinux: policy capability open_perms=1
May 16 01:36:11.744421 kernel: SELinux: policy capability extended_socket_class=1
May 16 01:36:11.744432 kernel: SELinux: policy capability always_check_network=0
May 16 01:36:11.744444 kernel: SELinux: policy capability cgroup_seclabel=1
May 16 01:36:11.744456 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 16 01:36:11.744467 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 16 01:36:11.744480 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 16 01:36:11.744493 kernel: audit: type=1403 audit(1747359370.761:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
May 16 01:36:11.744505 systemd[1]: Successfully loaded SELinux policy in 79.930ms.
May 16 01:36:11.744527 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 26.320ms.
May 16 01:36:11.744540 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
May 16 01:36:11.744553 systemd[1]: Detected virtualization kvm.
May 16 01:36:11.744565 systemd[1]: Detected architecture x86-64.
May 16 01:36:11.744577 systemd[1]: Detected first boot.
May 16 01:36:11.744589 systemd[1]: Hostname set to .
May 16 01:36:11.744602 systemd[1]: Initializing machine ID from VM UUID.
May 16 01:36:11.744614 zram_generator::config[994]: No configuration found.
May 16 01:36:11.744630 systemd[1]: Populated /etc with preset unit settings.
May 16 01:36:11.744642 systemd[1]: initrd-switch-root.service: Deactivated successfully.
May 16 01:36:11.744654 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
May 16 01:36:11.744670 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
May 16 01:36:11.744683 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
May 16 01:36:11.744695 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
May 16 01:36:11.744707 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
May 16 01:36:11.744719 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
May 16 01:36:11.744734 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
May 16 01:36:11.744746 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
May 16 01:36:11.744759 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
May 16 01:36:11.744772 systemd[1]: Created slice user.slice - User and Session Slice.
May 16 01:36:11.744784 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 16 01:36:11.744796 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 16 01:36:11.744808 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
May 16 01:36:11.744821 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
May 16 01:36:11.744833 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
May 16 01:36:11.744848 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 16 01:36:11.744861 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
May 16 01:36:11.744876 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 16 01:36:11.744888 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
May 16 01:36:11.744901 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
May 16 01:36:11.744913 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
May 16 01:36:11.744927 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
May 16 01:36:11.744940 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 16 01:36:11.744952 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 16 01:36:11.744964 systemd[1]: Reached target slices.target - Slice Units.
May 16 01:36:11.744976 systemd[1]: Reached target swap.target - Swaps.
May 16 01:36:11.744989 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
May 16 01:36:11.745002 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
May 16 01:36:11.745014 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 16 01:36:11.745029 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 16 01:36:11.745043 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 16 01:36:11.745055 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
May 16 01:36:11.745068 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
May 16 01:36:11.745080 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
May 16 01:36:11.745092 systemd[1]: Mounting media.mount - External Media Directory...
May 16 01:36:11.745105 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 16 01:36:11.745117 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
May 16 01:36:11.745129 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
May 16 01:36:11.745141 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
May 16 01:36:11.745155 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 16 01:36:11.745168 systemd[1]: Reached target machines.target - Containers.
May 16 01:36:11.745180 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
May 16 01:36:11.745192 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 16 01:36:11.745204 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 16 01:36:11.745216 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
May 16 01:36:11.745228 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 16 01:36:11.745240 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 16 01:36:11.745254 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 16 01:36:11.745281 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
May 16 01:36:11.745294 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 16 01:36:11.745307 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 16 01:36:11.745319 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
May 16 01:36:11.745331 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
May 16 01:36:11.745343 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
May 16 01:36:11.745355 systemd[1]: Stopped systemd-fsck-usr.service.
May 16 01:36:11.745367 systemd[1]: Starting systemd-journald.service - Journal Service...
May 16 01:36:11.745382 kernel: fuse: init (API version 7.39)
May 16 01:36:11.745394 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 16 01:36:11.745406 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 16 01:36:11.745419 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
May 16 01:36:11.745432 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 16 01:36:11.745445 systemd[1]: verity-setup.service: Deactivated successfully.
May 16 01:36:11.745457 systemd[1]: Stopped verity-setup.service.
May 16 01:36:11.745469 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 16 01:36:11.745481 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
May 16 01:36:11.745496 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
May 16 01:36:11.745508 systemd[1]: Mounted media.mount - External Media Directory.
May 16 01:36:11.745520 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
May 16 01:36:11.745532 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
May 16 01:36:11.745546 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
May 16 01:36:11.745559 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 16 01:36:11.745571 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 16 01:36:11.745583 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
May 16 01:36:11.745595 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 16 01:36:11.745608 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 16 01:36:11.745624 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
May 16 01:36:11.745637 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 16 01:36:11.745649 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 16 01:36:11.745677 systemd-journald[1090]: Collecting audit messages is disabled.
May 16 01:36:11.745703 systemd-journald[1090]: Journal started
May 16 01:36:11.745728 systemd-journald[1090]: Runtime Journal (/run/log/journal/24e51ecc260b44b6840b602da112b743) is 8.0M, max 78.3M, 70.3M free.
May 16 01:36:11.386795 systemd[1]: Queued start job for default target multi-user.target.
May 16 01:36:11.406975 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
May 16 01:36:11.407422 systemd[1]: systemd-journald.service: Deactivated successfully.
May 16 01:36:11.761281 systemd[1]: Started systemd-journald.service - Journal Service.
May 16 01:36:11.761155 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 16 01:36:11.761322 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
May 16 01:36:11.762090 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 16 01:36:11.762874 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 16 01:36:11.763657 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
May 16 01:36:11.774373 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 16 01:36:11.778388 kernel: ACPI: bus type drm_connector registered
May 16 01:36:11.781385 kernel: loop: module loaded
May 16 01:36:11.781423 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
May 16 01:36:11.783356 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
May 16 01:36:11.783904 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 16 01:36:11.783933 systemd[1]: Reached target local-fs.target - Local File Systems.
May 16 01:36:11.787680 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
May 16 01:36:11.796097 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
May 16 01:36:11.799392 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
May 16 01:36:11.800136 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 16 01:36:11.806438 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
May 16 01:36:11.810287 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
May 16 01:36:11.810901 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 16 01:36:11.813439 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
May 16 01:36:11.825466 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 16 01:36:11.831592 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
May 16 01:36:11.835435 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 16 01:36:11.837960 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 16 01:36:11.838151 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 16 01:36:11.839083 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 16 01:36:11.839309 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 16 01:36:11.842526 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 16 01:36:11.847760 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
May 16 01:36:11.848969 systemd-journald[1090]: Time spent on flushing to /var/log/journal/24e51ecc260b44b6840b602da112b743 is 65.798ms for 948 entries.
May 16 01:36:11.848969 systemd-journald[1090]: System Journal (/var/log/journal/24e51ecc260b44b6840b602da112b743) is 8.0M, max 584.8M, 576.8M free.
May 16 01:36:11.927287 systemd-journald[1090]: Received client request to flush runtime journal.
May 16 01:36:11.927327 kernel: loop0: detected capacity change from 0 to 138184
May 16 01:36:11.848616 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
May 16 01:36:11.851708 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
May 16 01:36:11.853712 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
May 16 01:36:11.875089 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
May 16 01:36:11.885430 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
May 16 01:36:11.893430 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 16 01:36:11.900576 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
May 16 01:36:11.901703 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 16 01:36:11.928735 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
May 16 01:36:11.937697 udevadm[1138]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
May 16 01:36:11.944481 systemd-tmpfiles[1126]: ACLs are not supported, ignoring.
May 16 01:36:11.944502 systemd-tmpfiles[1126]: ACLs are not supported, ignoring.
May 16 01:36:11.950958 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 16 01:36:11.960031 systemd[1]: Starting systemd-sysusers.service - Create System Users...
May 16 01:36:11.970215 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 16 01:36:11.973310 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
May 16 01:36:11.998354 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 16 01:36:12.026441 kernel: loop1: detected capacity change from 0 to 221472
May 16 01:36:12.034730 systemd[1]: Finished systemd-sysusers.service - Create System Users.
May 16 01:36:12.042439 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 16 01:36:12.088670 systemd-tmpfiles[1150]: ACLs are not supported, ignoring.
May 16 01:36:12.088691 systemd-tmpfiles[1150]: ACLs are not supported, ignoring.
May 16 01:36:12.093498 kernel: loop2: detected capacity change from 0 to 8
May 16 01:36:12.100512 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 16 01:36:12.122815 kernel: loop3: detected capacity change from 0 to 140992
May 16 01:36:12.210304 kernel: loop4: detected capacity change from 0 to 138184
May 16 01:36:12.249350 kernel: loop5: detected capacity change from 0 to 221472
May 16 01:36:12.300083 kernel: loop6: detected capacity change from 0 to 8
May 16 01:36:12.307349 kernel: loop7: detected capacity change from 0 to 140992
May 16 01:36:12.378473 (sd-merge)[1156]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'.
May 16 01:36:12.381762 (sd-merge)[1156]: Merged extensions into '/usr'.
May 16 01:36:12.391881 systemd[1]: Reloading requested from client PID 1125 ('systemd-sysext') (unit systemd-sysext.service)...
May 16 01:36:12.391893 systemd[1]: Reloading...
May 16 01:36:12.488371 zram_generator::config[1183]: No configuration found.
May 16 01:36:12.730997 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 16 01:36:12.777984 ldconfig[1120]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 16 01:36:12.802805 systemd[1]: Reloading finished in 410 ms.
May 16 01:36:12.831584 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
May 16 01:36:12.832704 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
May 16 01:36:12.833702 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
May 16 01:36:12.844448 systemd[1]: Starting ensure-sysext.service...
May 16 01:36:12.846435 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 16 01:36:12.851109 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 16 01:36:12.868461 systemd[1]: Reloading requested from client PID 1239 ('systemctl') (unit ensure-sysext.service)...
May 16 01:36:12.868480 systemd[1]: Reloading...
May 16 01:36:12.890775 systemd-tmpfiles[1240]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 16 01:36:12.892423 systemd-tmpfiles[1240]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
May 16 01:36:12.896514 systemd-tmpfiles[1240]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 16 01:36:12.897625 systemd-tmpfiles[1240]: ACLs are not supported, ignoring.
May 16 01:36:12.897779 systemd-tmpfiles[1240]: ACLs are not supported, ignoring.
May 16 01:36:12.907034 systemd-tmpfiles[1240]: Detected autofs mount point /boot during canonicalization of boot.
May 16 01:36:12.907178 systemd-tmpfiles[1240]: Skipping /boot
May 16 01:36:12.912083 systemd-udevd[1241]: Using default interface naming scheme 'v255'.
May 16 01:36:12.918695 systemd-tmpfiles[1240]: Detected autofs mount point /boot during canonicalization of boot.
May 16 01:36:12.918785 systemd-tmpfiles[1240]: Skipping /boot
May 16 01:36:12.960289 zram_generator::config[1271]: No configuration found.
May 16 01:36:13.059294 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1276)
May 16 01:36:13.109288 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
May 16 01:36:13.130288 kernel: ACPI: button: Power Button [PWRF]
May 16 01:36:13.182377 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
May 16 01:36:13.199849 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 16 01:36:13.203287 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
May 16 01:36:13.236542 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
May 16 01:36:13.236617 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
May 16 01:36:13.237933 kernel: mousedev: PS/2 mouse device common for all mice
May 16 01:36:13.247638 kernel: Console: switching to colour dummy device 80x25
May 16 01:36:13.247693 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
May 16 01:36:13.247719 kernel: [drm] features: -context_init
May 16 01:36:13.249502 kernel: [drm] number of scanouts: 1
May 16 01:36:13.249536 kernel: [drm] number of cap sets: 0
May 16 01:36:13.252291 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0
May 16 01:36:13.262290 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
May 16 01:36:13.262381 kernel: Console: switching to colour frame buffer device 160x50
May 16 01:36:13.268706 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
May 16 01:36:13.291693 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 16 01:36:13.295190 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
May 16 01:36:13.295317 systemd[1]: Reloading finished in 426 ms.
May 16 01:36:13.309589 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 16 01:36:13.317685 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 16 01:36:13.365448 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 16 01:36:13.371740 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 16 01:36:13.383774 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
May 16 01:36:13.384328 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 16 01:36:13.391549 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 16 01:36:13.395575 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 16 01:36:13.400752 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 16 01:36:13.405609 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 16 01:36:13.407834 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 16 01:36:13.411376 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... May 16 01:36:13.416897 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... May 16 01:36:13.427657 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 16 01:36:13.436473 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 16 01:36:13.445433 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... May 16 01:36:13.446894 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 16 01:36:13.448165 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 16 01:36:13.449332 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. May 16 01:36:13.452133 systemd[1]: Finished ensure-sysext.service. May 16 01:36:13.452904 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 16 01:36:13.453516 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 16 01:36:13.453907 systemd[1]: modprobe@drm.service: Deactivated successfully. 
May 16 01:36:13.454034 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 16 01:36:13.454366 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 16 01:36:13.454480 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 16 01:36:13.454778 systemd[1]: modprobe@loop.service: Deactivated successfully. May 16 01:36:13.454883 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 16 01:36:13.470629 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... May 16 01:36:13.473956 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 16 01:36:13.474144 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 16 01:36:13.484123 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... May 16 01:36:13.491548 systemd[1]: Starting systemd-userdbd.service - User Database Manager... May 16 01:36:13.495826 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. May 16 01:36:13.505652 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. May 16 01:36:13.533831 lvm[1381]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 16 01:36:13.570527 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. May 16 01:36:13.572169 systemd[1]: Started systemd-userdbd.service - User Database Manager. May 16 01:36:13.572871 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 16 01:36:13.578824 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... May 16 01:36:13.622087 lvm[1402]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. 
May 16 01:36:13.625036 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. May 16 01:36:13.634445 systemd[1]: Starting systemd-update-done.service - Update is Completed... May 16 01:36:13.638032 augenrules[1410]: No rules May 16 01:36:13.639458 systemd[1]: audit-rules.service: Deactivated successfully. May 16 01:36:13.640044 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 16 01:36:13.683298 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. May 16 01:36:13.686535 systemd[1]: Finished systemd-update-done.service - Update is Completed. May 16 01:36:13.723042 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 16 01:36:13.729698 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. May 16 01:36:13.732256 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 16 01:36:13.737551 systemd-networkd[1365]: lo: Link UP May 16 01:36:13.737557 systemd-networkd[1365]: lo: Gained carrier May 16 01:36:13.739049 systemd-networkd[1365]: Enumeration completed May 16 01:36:13.739197 systemd[1]: Started systemd-networkd.service - Network Configuration. May 16 01:36:13.743554 systemd-resolved[1366]: Positive Trust Anchors: May 16 01:36:13.743571 systemd-resolved[1366]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 16 01:36:13.743612 systemd-resolved[1366]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 16 01:36:13.744345 systemd-networkd[1365]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 16 01:36:13.744351 systemd-networkd[1365]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 16 01:36:13.747695 systemd-networkd[1365]: eth0: Link UP May 16 01:36:13.747775 systemd-networkd[1365]: eth0: Gained carrier May 16 01:36:13.747853 systemd-networkd[1365]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 16 01:36:13.749162 systemd-resolved[1366]: Using system hostname 'ci-4152-2-3-n-26e690edb8.novalocal'. May 16 01:36:13.750557 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... May 16 01:36:13.751494 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. May 16 01:36:13.752148 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 16 01:36:13.754825 systemd[1]: Reached target network.target - Network. May 16 01:36:13.755355 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 16 01:36:13.755811 systemd[1]: Reached target sysinit.target - System Initialization. 
May 16 01:36:13.757257 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 16 01:36:13.757911 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 16 01:36:13.758415 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 16 01:36:13.758904 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 16 01:36:13.758931 systemd[1]: Reached target paths.target - Path Units. May 16 01:36:13.759411 systemd[1]: Reached target time-set.target - System Time Set. May 16 01:36:13.760121 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 16 01:36:13.760673 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 16 01:36:13.761115 systemd[1]: Reached target timers.target - Timer Units. May 16 01:36:13.763444 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 16 01:36:13.765555 systemd[1]: Starting docker.socket - Docker Socket for the API... May 16 01:36:13.766820 systemd-networkd[1365]: eth0: DHCPv4 address 172.24.4.31/24, gateway 172.24.4.1 acquired from 172.24.4.1 May 16 01:36:13.774595 systemd-timesyncd[1383]: Network configuration changed, trying to establish connection. May 16 01:36:13.776685 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 16 01:36:13.779052 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 16 01:36:13.779939 systemd[1]: Reached target sockets.target - Socket Units. May 16 01:36:13.780613 systemd[1]: Reached target basic.target - Basic System. May 16 01:36:13.781174 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. 
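The `systemd-networkd` DHCPv4 entry above ("eth0: DHCPv4 address 172.24.4.31/24, gateway 172.24.4.1 acquired from 172.24.4.1") has a fixed shape that can be scraped from a journal dump. A minimal sketch — the regex and field names are my own, not anything systemd provides:

```python
import re

# Example record copied from the log above.
line = ("systemd-networkd[1365]: eth0: DHCPv4 address 172.24.4.31/24, "
        "gateway 172.24.4.1 acquired from 172.24.4.1")

# Hypothetical pattern: interface, CIDR address, gateway, DHCP server.
pat = re.compile(
    r"systemd-networkd\[\d+\]: (?P<iface>\S+): DHCPv4 address "
    r"(?P<addr>[\d.]+/\d+), gateway (?P<gw>[\d.]+) acquired from (?P<srv>[\d.]+)"
)

m = pat.search(line)
if m:
    print(m.group("iface"), m.group("addr"), m.group("gw"))
```

This is a sketch against the one message format seen here; other networkd address events (static, IPv6LL as on eth0 later in this log) are worded differently.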
May 16 01:36:13.781300 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 16 01:36:13.782694 systemd[1]: Starting containerd.service - containerd container runtime... May 16 01:36:13.792702 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... May 16 01:36:13.800430 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 16 01:36:13.809384 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 16 01:36:13.814245 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 16 01:36:13.815830 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 16 01:36:13.820446 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 16 01:36:13.825255 jq[1433]: false May 16 01:36:13.826413 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 16 01:36:13.838569 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 16 01:36:13.845233 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 16 01:36:13.857206 extend-filesystems[1434]: Found loop4 May 16 01:36:13.860105 systemd[1]: Starting systemd-logind.service - User Login Management... 
May 16 01:36:13.877796 extend-filesystems[1434]: Found loop5 May 16 01:36:13.877796 extend-filesystems[1434]: Found loop6 May 16 01:36:13.877796 extend-filesystems[1434]: Found loop7 May 16 01:36:13.877796 extend-filesystems[1434]: Found vda May 16 01:36:13.877796 extend-filesystems[1434]: Found vda1 May 16 01:36:13.877796 extend-filesystems[1434]: Found vda2 May 16 01:36:13.877796 extend-filesystems[1434]: Found vda3 May 16 01:36:13.877796 extend-filesystems[1434]: Found usr May 16 01:36:13.877796 extend-filesystems[1434]: Found vda4 May 16 01:36:13.877796 extend-filesystems[1434]: Found vda6 May 16 01:36:13.877796 extend-filesystems[1434]: Found vda7 May 16 01:36:13.877796 extend-filesystems[1434]: Found vda9 May 16 01:36:13.877796 extend-filesystems[1434]: Checking size of /dev/vda9 May 16 01:36:15.103171 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 2014203 blocks May 16 01:36:15.103214 kernel: EXT4-fs (vda9): resized filesystem to 2014203 May 16 01:36:15.103232 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1282) May 16 01:36:13.865041 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 16 01:36:15.103471 extend-filesystems[1434]: Resized partition /dev/vda9 May 16 01:36:13.890733 dbus-daemon[1430]: [system] SELinux support is enabled May 16 01:36:13.865543 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 16 01:36:15.115646 extend-filesystems[1456]: resize2fs 1.47.1 (20-May-2024) May 16 01:36:15.115646 extend-filesystems[1456]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required May 16 01:36:15.115646 extend-filesystems[1456]: old_desc_blocks = 1, new_desc_blocks = 1 May 16 01:36:15.115646 extend-filesystems[1456]: The filesystem on /dev/vda9 is now 2014203 (4k) blocks long. 
May 16 01:36:13.874481 systemd[1]: Starting update-engine.service - Update Engine... May 16 01:36:15.140550 extend-filesystems[1434]: Resized filesystem in /dev/vda9 May 16 01:36:15.141132 update_engine[1448]: I20250516 01:36:15.098567 1448 main.cc:92] Flatcar Update Engine starting May 16 01:36:15.141132 update_engine[1448]: I20250516 01:36:15.109297 1448 update_check_scheduler.cc:74] Next update check in 7m35s May 16 01:36:13.880435 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 16 01:36:15.141530 tar[1458]: linux-amd64/helm May 16 01:36:13.887204 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 16 01:36:15.144342 jq[1451]: true May 16 01:36:13.888324 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 16 01:36:13.888633 systemd[1]: motdgen.service: Deactivated successfully. May 16 01:36:13.888772 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 16 01:36:15.145792 jq[1465]: true May 16 01:36:13.892336 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 16 01:36:13.920675 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 16 01:36:13.921125 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. May 16 01:36:15.028845 systemd-resolved[1366]: Clock change detected. Flushing caches. May 16 01:36:15.029192 systemd-timesyncd[1383]: Contacted time server 12.205.28.193:123 (0.flatcar.pool.ntp.org). May 16 01:36:15.029349 systemd-timesyncd[1383]: Initial clock synchronization to Fri 2025-05-16 01:36:15.028777 UTC. May 16 01:36:15.054418 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). 
May 16 01:36:15.054444 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 16 01:36:15.056175 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 16 01:36:15.056199 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 16 01:36:15.069228 (ntainerd)[1464]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 16 01:36:15.073362 systemd[1]: extend-filesystems.service: Deactivated successfully. May 16 01:36:15.076409 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 16 01:36:15.109507 systemd[1]: Started update-engine.service - Update Engine. May 16 01:36:15.116815 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 16 01:36:15.176978 systemd-logind[1441]: New seat seat0. May 16 01:36:15.182804 systemd-logind[1441]: Watching system buttons on /dev/input/event1 (Power Button) May 16 01:36:15.182832 systemd-logind[1441]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) May 16 01:36:15.183033 systemd[1]: Started systemd-logind.service - User Login Management. May 16 01:36:15.286139 bash[1488]: Updated "/home/core/.ssh/authorized_keys" May 16 01:36:15.279023 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 16 01:36:15.296929 systemd[1]: Starting sshkeys.service... May 16 01:36:15.329796 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. May 16 01:36:15.343195 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... 
May 16 01:36:15.416857 locksmithd[1470]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 16 01:36:15.556337 containerd[1464]: time="2025-05-16T01:36:15.556214064Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 May 16 01:36:15.637451 containerd[1464]: time="2025-05-16T01:36:15.637379240Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 May 16 01:36:15.645001 containerd[1464]: time="2025-05-16T01:36:15.644954247Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.90-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 16 01:36:15.645001 containerd[1464]: time="2025-05-16T01:36:15.644988260Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 May 16 01:36:15.645001 containerd[1464]: time="2025-05-16T01:36:15.645006966Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 May 16 01:36:15.645408 containerd[1464]: time="2025-05-16T01:36:15.645202132Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 May 16 01:36:15.645408 containerd[1464]: time="2025-05-16T01:36:15.645233220Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 May 16 01:36:15.645408 containerd[1464]: time="2025-05-16T01:36:15.645319742Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 May 16 01:36:15.645408 containerd[1464]: time="2025-05-16T01:36:15.645335973Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
type=io.containerd.snapshotter.v1 May 16 01:36:15.645537 containerd[1464]: time="2025-05-16T01:36:15.645507725Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 16 01:36:15.645537 containerd[1464]: time="2025-05-16T01:36:15.645532140Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 May 16 01:36:15.645655 containerd[1464]: time="2025-05-16T01:36:15.645550124Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 May 16 01:36:15.645655 containerd[1464]: time="2025-05-16T01:36:15.645565443Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 May 16 01:36:15.645715 containerd[1464]: time="2025-05-16T01:36:15.645679437Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 May 16 01:36:15.646185 containerd[1464]: time="2025-05-16T01:36:15.645895472Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 May 16 01:36:15.646185 containerd[1464]: time="2025-05-16T01:36:15.646002422Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 16 01:36:15.646185 containerd[1464]: time="2025-05-16T01:36:15.646018453Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." 
type=io.containerd.content.v1 May 16 01:36:15.646185 containerd[1464]: time="2025-05-16T01:36:15.646100286Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 May 16 01:36:15.646185 containerd[1464]: time="2025-05-16T01:36:15.646150390Z" level=info msg="metadata content store policy set" policy=shared May 16 01:36:15.657123 containerd[1464]: time="2025-05-16T01:36:15.657004647Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 16 01:36:15.657123 containerd[1464]: time="2025-05-16T01:36:15.657062315Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 May 16 01:36:15.657123 containerd[1464]: time="2025-05-16T01:36:15.657080980Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 May 16 01:36:15.657123 containerd[1464]: time="2025-05-16T01:36:15.657100447Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 May 16 01:36:15.657267 containerd[1464]: time="2025-05-16T01:36:15.657156312Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 May 16 01:36:15.657577 containerd[1464]: time="2025-05-16T01:36:15.657285674Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 May 16 01:36:15.658904 containerd[1464]: time="2025-05-16T01:36:15.658878762Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 May 16 01:36:15.659177 containerd[1464]: time="2025-05-16T01:36:15.658980363Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 May 16 01:36:15.659177 containerd[1464]: time="2025-05-16T01:36:15.659005750Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." 
type=io.containerd.sandbox.store.v1 May 16 01:36:15.659177 containerd[1464]: time="2025-05-16T01:36:15.659021410Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 May 16 01:36:15.659177 containerd[1464]: time="2025-05-16T01:36:15.659036328Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 May 16 01:36:15.659177 containerd[1464]: time="2025-05-16T01:36:15.659053931Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 May 16 01:36:15.659177 containerd[1464]: time="2025-05-16T01:36:15.659068047Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 May 16 01:36:15.659177 containerd[1464]: time="2025-05-16T01:36:15.659083406Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 May 16 01:36:15.659177 containerd[1464]: time="2025-05-16T01:36:15.659098033Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 May 16 01:36:15.659177 containerd[1464]: time="2025-05-16T01:36:15.659113041Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 16 01:36:15.659177 containerd[1464]: time="2025-05-16T01:36:15.659127999Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 May 16 01:36:15.659177 containerd[1464]: time="2025-05-16T01:36:15.659142206Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 May 16 01:36:15.659177 containerd[1464]: time="2025-05-16T01:36:15.659165850Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." 
type=io.containerd.grpc.v1 May 16 01:36:15.659177 containerd[1464]: time="2025-05-16T01:36:15.659182331Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 May 16 01:36:15.659451 containerd[1464]: time="2025-05-16T01:36:15.659199243Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 16 01:36:15.659451 containerd[1464]: time="2025-05-16T01:36:15.659215103Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 May 16 01:36:15.659451 containerd[1464]: time="2025-05-16T01:36:15.659230021Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 16 01:36:15.659451 containerd[1464]: time="2025-05-16T01:36:15.659245941Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 May 16 01:36:15.659451 containerd[1464]: time="2025-05-16T01:36:15.659260498Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 May 16 01:36:15.659451 containerd[1464]: time="2025-05-16T01:36:15.659274905Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 May 16 01:36:15.659451 containerd[1464]: time="2025-05-16T01:36:15.659291085Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 May 16 01:36:15.659451 containerd[1464]: time="2025-05-16T01:36:15.659307927Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 May 16 01:36:15.659451 containerd[1464]: time="2025-05-16T01:36:15.659321713Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 16 01:36:15.659451 containerd[1464]: time="2025-05-16T01:36:15.659336511Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." 
type=io.containerd.grpc.v1 May 16 01:36:15.659451 containerd[1464]: time="2025-05-16T01:36:15.659350707Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 16 01:36:15.659451 containerd[1464]: time="2025-05-16T01:36:15.659367088Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 May 16 01:36:15.659451 containerd[1464]: time="2025-05-16T01:36:15.659397054Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 May 16 01:36:15.659451 containerd[1464]: time="2025-05-16T01:36:15.659417222Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 May 16 01:36:15.659451 containerd[1464]: time="2025-05-16T01:36:15.659447539Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 16 01:36:15.661700 containerd[1464]: time="2025-05-16T01:36:15.661674135Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 May 16 01:36:15.661740 containerd[1464]: time="2025-05-16T01:36:15.661702288Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 May 16 01:36:15.661740 containerd[1464]: time="2025-05-16T01:36:15.661715242Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 May 16 01:36:15.661740 containerd[1464]: time="2025-05-16T01:36:15.661730861Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 May 16 01:36:15.661815 containerd[1464]: time="2025-05-16T01:36:15.661742283Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." 
type=io.containerd.grpc.v1
May 16 01:36:15.661815 containerd[1464]: time="2025-05-16T01:36:15.661756840Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
May 16 01:36:15.661815 containerd[1464]: time="2025-05-16T01:36:15.661767810Z" level=info msg="NRI interface is disabled by configuration."
May 16 01:36:15.661815 containerd[1464]: time="2025-05-16T01:36:15.661778400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
May 16 01:36:15.662180 containerd[1464]: time="2025-05-16T01:36:15.662092399Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
May 16 01:36:15.662327 containerd[1464]: time="2025-05-16T01:36:15.662183420Z" level=info msg="Connect containerd service"
May 16 01:36:15.662327 containerd[1464]: time="2025-05-16T01:36:15.662211162Z" level=info msg="using legacy CRI server"
May 16 01:36:15.662327 containerd[1464]: time="2025-05-16T01:36:15.662218436Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
May 16 01:36:15.662401 containerd[1464]: time="2025-05-16T01:36:15.662335225Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
May 16 01:36:15.663269 containerd[1464]: time="2025-05-16T01:36:15.662994651Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 16 01:36:15.663269 containerd[1464]: time="2025-05-16T01:36:15.663119746Z" level=info msg="Start subscribing containerd event"
May 16 01:36:15.663269 containerd[1464]: time="2025-05-16T01:36:15.663165642Z" level=info msg="Start recovering state"
May 16 01:36:15.663269 containerd[1464]: time="2025-05-16T01:36:15.663217159Z" level=info msg="Start event monitor"
May 16 01:36:15.663269 containerd[1464]: time="2025-05-16T01:36:15.663240963Z" level=info msg="Start snapshots syncer"
May 16 01:36:15.663269 containerd[1464]: time="2025-05-16T01:36:15.663251363Z" level=info msg="Start cni network conf syncer for default"
May 16 01:36:15.663269 containerd[1464]: time="2025-05-16T01:36:15.663261502Z" level=info msg="Start streaming server"
May 16 01:36:15.667800 containerd[1464]: time="2025-05-16T01:36:15.665765839Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
May 16 01:36:15.667800 containerd[1464]: time="2025-05-16T01:36:15.665831792Z" level=info msg=serving... address=/run/containerd/containerd.sock
May 16 01:36:15.665976 systemd[1]: Started containerd.service - containerd container runtime.
May 16 01:36:15.674387 containerd[1464]: time="2025-05-16T01:36:15.674350038Z" level=info msg="containerd successfully booted in 0.120438s"
May 16 01:36:15.760081 sshd_keygen[1457]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
May 16 01:36:15.784232 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
May 16 01:36:15.799459 systemd[1]: Starting issuegen.service - Generate /run/issue...
May 16 01:36:15.814019 tar[1458]: linux-amd64/LICENSE
May 16 01:36:15.814157 tar[1458]: linux-amd64/README.md
May 16 01:36:15.826279 systemd[1]: issuegen.service: Deactivated successfully.
May 16 01:36:15.826767 systemd[1]: Finished issuegen.service - Generate /run/issue.
May 16 01:36:15.830697 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
May 16 01:36:15.838909 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
May 16 01:36:15.850610 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
May 16 01:36:15.867995 systemd[1]: Started getty@tty1.service - Getty on tty1.
May 16 01:36:15.871954 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
May 16 01:36:15.874038 systemd[1]: Reached target getty.target - Login Prompts.
May 16 01:36:16.177778 systemd-networkd[1365]: eth0: Gained IPv6LL
May 16 01:36:16.179998 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
May 16 01:36:16.187521 systemd[1]: Reached target network-online.target - Network is Online.
May 16 01:36:16.230914 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 16 01:36:16.235048 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
May 16 01:36:16.319014 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
May 16 01:36:18.084788 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
May 16 01:36:18.097974 systemd[1]: Started sshd@0-172.24.4.31:22-172.24.4.1:33688.service - OpenSSH per-connection server daemon (172.24.4.1:33688).
May 16 01:36:18.606720 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 16 01:36:18.628515 (kubelet)[1548]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 16 01:36:19.081713 sshd[1540]: Accepted publickey for core from 172.24.4.1 port 33688 ssh2: RSA SHA256:hbqBC1N3eVnarOblqHTsu5pd4fHUzAJAaCt0vGQ0ke0
May 16 01:36:19.082887 sshd-session[1540]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 01:36:19.108385 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
May 16 01:36:19.123365 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
May 16 01:36:19.135637 systemd-logind[1441]: New session 1 of user core.
May 16 01:36:19.169141 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
May 16 01:36:19.179903 systemd[1]: Starting user@500.service - User Manager for UID 500...
May 16 01:36:19.192205 (systemd)[1555]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
May 16 01:36:19.311470 systemd[1555]: Queued start job for default target default.target.
May 16 01:36:19.316759 systemd[1555]: Created slice app.slice - User Application Slice.
May 16 01:36:19.316785 systemd[1555]: Reached target paths.target - Paths.
May 16 01:36:19.316800 systemd[1555]: Reached target timers.target - Timers.
May 16 01:36:19.320693 systemd[1555]: Starting dbus.socket - D-Bus User Message Bus Socket...
May 16 01:36:19.330134 systemd[1555]: Listening on dbus.socket - D-Bus User Message Bus Socket.
May 16 01:36:19.330767 systemd[1555]: Reached target sockets.target - Sockets.
May 16 01:36:19.330784 systemd[1555]: Reached target basic.target - Basic System.
May 16 01:36:19.330826 systemd[1555]: Reached target default.target - Main User Target.
May 16 01:36:19.330853 systemd[1555]: Startup finished in 132ms.
May 16 01:36:19.331821 systemd[1]: Started user@500.service - User Manager for UID 500.
May 16 01:36:19.342837 systemd[1]: Started session-1.scope - Session 1 of User core.
May 16 01:36:19.777650 kubelet[1548]: E0516 01:36:19.775375 1548 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 16 01:36:19.779090 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 16 01:36:19.779359 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 16 01:36:19.780106 systemd[1]: kubelet.service: Consumed 2.179s CPU time.
May 16 01:36:19.835398 systemd[1]: Started sshd@1-172.24.4.31:22-172.24.4.1:33700.service - OpenSSH per-connection server daemon (172.24.4.1:33700).
May 16 01:36:20.917884 login[1523]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
May 16 01:36:20.923332 login[1524]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
May 16 01:36:20.931576 systemd-logind[1441]: New session 3 of user core.
May 16 01:36:20.946990 systemd[1]: Started session-3.scope - Session 3 of User core.
May 16 01:36:20.953907 systemd-logind[1441]: New session 2 of user core.
May 16 01:36:20.960415 systemd[1]: Started session-2.scope - Session 2 of User core.
May 16 01:36:21.988399 coreos-metadata[1429]: May 16 01:36:21.988 WARN failed to locate config-drive, using the metadata service API instead
May 16 01:36:22.015347 sshd[1568]: Accepted publickey for core from 172.24.4.1 port 33700 ssh2: RSA SHA256:hbqBC1N3eVnarOblqHTsu5pd4fHUzAJAaCt0vGQ0ke0
May 16 01:36:22.019760 sshd-session[1568]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 01:36:22.030821 systemd-logind[1441]: New session 4 of user core.
May 16 01:36:22.039575 coreos-metadata[1429]: May 16 01:36:22.039 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1
May 16 01:36:22.041073 systemd[1]: Started session-4.scope - Session 4 of User core.
May 16 01:36:22.325458 coreos-metadata[1429]: May 16 01:36:22.325 INFO Fetch successful
May 16 01:36:22.325458 coreos-metadata[1429]: May 16 01:36:22.325 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
May 16 01:36:22.343204 coreos-metadata[1429]: May 16 01:36:22.343 INFO Fetch successful
May 16 01:36:22.343340 coreos-metadata[1429]: May 16 01:36:22.343 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1
May 16 01:36:22.358389 coreos-metadata[1429]: May 16 01:36:22.358 INFO Fetch successful
May 16 01:36:22.358549 coreos-metadata[1429]: May 16 01:36:22.358 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1
May 16 01:36:22.370328 coreos-metadata[1429]: May 16 01:36:22.370 INFO Fetch successful
May 16 01:36:22.370482 coreos-metadata[1429]: May 16 01:36:22.370 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1
May 16 01:36:22.381255 coreos-metadata[1429]: May 16 01:36:22.381 INFO Fetch successful
May 16 01:36:22.381507 coreos-metadata[1429]: May 16 01:36:22.381 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1
May 16 01:36:22.390128 coreos-metadata[1429]: May 16 01:36:22.390 INFO Fetch successful
May 16 01:36:22.448107 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
May 16 01:36:22.450247 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
May 16 01:36:22.466097 coreos-metadata[1495]: May 16 01:36:22.465 WARN failed to locate config-drive, using the metadata service API instead
May 16 01:36:22.508246 coreos-metadata[1495]: May 16 01:36:22.508 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1
May 16 01:36:22.523746 coreos-metadata[1495]: May 16 01:36:22.523 INFO Fetch successful
May 16 01:36:22.523832 coreos-metadata[1495]: May 16 01:36:22.523 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1
May 16 01:36:22.537112 coreos-metadata[1495]: May 16 01:36:22.536 INFO Fetch successful
May 16 01:36:22.547236 unknown[1495]: wrote ssh authorized keys file for user: core
May 16 01:36:22.602274 update-ssh-keys[1609]: Updated "/home/core/.ssh/authorized_keys"
May 16 01:36:22.603551 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
May 16 01:36:22.607304 systemd[1]: Finished sshkeys.service.
May 16 01:36:22.612018 systemd[1]: Reached target multi-user.target - Multi-User System.
May 16 01:36:22.612269 systemd[1]: Startup finished in 1.200s (kernel) + 15.943s (initrd) + 10.824s (userspace) = 27.968s.
May 16 01:36:22.622526 sshd[1599]: Connection closed by 172.24.4.1 port 33700
May 16 01:36:22.621414 sshd-session[1568]: pam_unix(sshd:session): session closed for user core
May 16 01:36:22.633909 systemd[1]: sshd@1-172.24.4.31:22-172.24.4.1:33700.service: Deactivated successfully.
May 16 01:36:22.637225 systemd[1]: session-4.scope: Deactivated successfully.
May 16 01:36:22.640951 systemd-logind[1441]: Session 4 logged out. Waiting for processes to exit.
May 16 01:36:22.653431 systemd[1]: Started sshd@2-172.24.4.31:22-172.24.4.1:33708.service - OpenSSH per-connection server daemon (172.24.4.1:33708).
May 16 01:36:22.656111 systemd-logind[1441]: Removed session 4.
May 16 01:36:23.830113 sshd[1615]: Accepted publickey for core from 172.24.4.1 port 33708 ssh2: RSA SHA256:hbqBC1N3eVnarOblqHTsu5pd4fHUzAJAaCt0vGQ0ke0
May 16 01:36:23.833344 sshd-session[1615]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 01:36:23.846953 systemd-logind[1441]: New session 5 of user core.
May 16 01:36:23.855993 systemd[1]: Started session-5.scope - Session 5 of User core.
May 16 01:36:24.472845 sshd[1617]: Connection closed by 172.24.4.1 port 33708
May 16 01:36:24.471999 sshd-session[1615]: pam_unix(sshd:session): session closed for user core
May 16 01:36:24.478011 systemd[1]: sshd@2-172.24.4.31:22-172.24.4.1:33708.service: Deactivated successfully.
May 16 01:36:24.481777 systemd[1]: session-5.scope: Deactivated successfully.
May 16 01:36:24.485469 systemd-logind[1441]: Session 5 logged out. Waiting for processes to exit.
May 16 01:36:24.488083 systemd-logind[1441]: Removed session 5.
May 16 01:36:30.030495 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
May 16 01:36:30.039974 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 16 01:36:30.372003 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 16 01:36:30.385213 (kubelet)[1629]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 16 01:36:30.503789 kubelet[1629]: E0516 01:36:30.503712 1629 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 16 01:36:30.512280 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 16 01:36:30.512718 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 16 01:36:34.728174 systemd[1]: Started sshd@3-172.24.4.31:22-172.24.4.1:37010.service - OpenSSH per-connection server daemon (172.24.4.1:37010).
May 16 01:36:35.770632 sshd[1637]: Accepted publickey for core from 172.24.4.1 port 37010 ssh2: RSA SHA256:hbqBC1N3eVnarOblqHTsu5pd4fHUzAJAaCt0vGQ0ke0
May 16 01:36:35.773581 sshd-session[1637]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 01:36:35.784690 systemd-logind[1441]: New session 6 of user core.
May 16 01:36:35.791890 systemd[1]: Started session-6.scope - Session 6 of User core.
May 16 01:36:36.286122 sshd[1639]: Connection closed by 172.24.4.1 port 37010
May 16 01:36:36.285959 sshd-session[1637]: pam_unix(sshd:session): session closed for user core
May 16 01:36:36.298271 systemd[1]: sshd@3-172.24.4.31:22-172.24.4.1:37010.service: Deactivated successfully.
May 16 01:36:36.301485 systemd[1]: session-6.scope: Deactivated successfully.
May 16 01:36:36.305002 systemd-logind[1441]: Session 6 logged out. Waiting for processes to exit.
May 16 01:36:36.313224 systemd[1]: Started sshd@4-172.24.4.31:22-172.24.4.1:37012.service - OpenSSH per-connection server daemon (172.24.4.1:37012).
May 16 01:36:36.317221 systemd-logind[1441]: Removed session 6.
May 16 01:36:37.395637 sshd[1644]: Accepted publickey for core from 172.24.4.1 port 37012 ssh2: RSA SHA256:hbqBC1N3eVnarOblqHTsu5pd4fHUzAJAaCt0vGQ0ke0
May 16 01:36:37.398737 sshd-session[1644]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 01:36:37.408497 systemd-logind[1441]: New session 7 of user core.
May 16 01:36:37.417914 systemd[1]: Started session-7.scope - Session 7 of User core.
May 16 01:36:37.982638 sshd[1646]: Connection closed by 172.24.4.1 port 37012
May 16 01:36:37.981741 sshd-session[1644]: pam_unix(sshd:session): session closed for user core
May 16 01:36:37.999422 systemd[1]: sshd@4-172.24.4.31:22-172.24.4.1:37012.service: Deactivated successfully.
May 16 01:36:38.002645 systemd[1]: session-7.scope: Deactivated successfully.
May 16 01:36:38.005921 systemd-logind[1441]: Session 7 logged out. Waiting for processes to exit.
May 16 01:36:38.015154 systemd[1]: Started sshd@5-172.24.4.31:22-172.24.4.1:37018.service - OpenSSH per-connection server daemon (172.24.4.1:37018).
May 16 01:36:38.019070 systemd-logind[1441]: Removed session 7.
May 16 01:36:38.963168 sshd[1651]: Accepted publickey for core from 172.24.4.1 port 37018 ssh2: RSA SHA256:hbqBC1N3eVnarOblqHTsu5pd4fHUzAJAaCt0vGQ0ke0
May 16 01:36:38.966540 sshd-session[1651]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 01:36:38.976152 systemd-logind[1441]: New session 8 of user core.
May 16 01:36:38.985872 systemd[1]: Started session-8.scope - Session 8 of User core.
May 16 01:36:39.543180 sshd[1653]: Connection closed by 172.24.4.1 port 37018
May 16 01:36:39.544276 sshd-session[1651]: pam_unix(sshd:session): session closed for user core
May 16 01:36:39.557229 systemd[1]: sshd@5-172.24.4.31:22-172.24.4.1:37018.service: Deactivated successfully.
May 16 01:36:39.560418 systemd[1]: session-8.scope: Deactivated successfully.
May 16 01:36:39.563988 systemd-logind[1441]: Session 8 logged out. Waiting for processes to exit.
May 16 01:36:39.570110 systemd[1]: Started sshd@6-172.24.4.31:22-172.24.4.1:37032.service - OpenSSH per-connection server daemon (172.24.4.1:37032).
May 16 01:36:39.572897 systemd-logind[1441]: Removed session 8.
May 16 01:36:40.668850 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
May 16 01:36:40.677968 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 16 01:36:40.913906 sshd[1658]: Accepted publickey for core from 172.24.4.1 port 37032 ssh2: RSA SHA256:hbqBC1N3eVnarOblqHTsu5pd4fHUzAJAaCt0vGQ0ke0
May 16 01:36:40.916668 sshd-session[1658]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 01:36:40.928997 systemd-logind[1441]: New session 9 of user core.
May 16 01:36:40.937893 systemd[1]: Started session-9.scope - Session 9 of User core.
May 16 01:36:41.034767 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 16 01:36:41.050470 (kubelet)[1669]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 16 01:36:41.141816 kubelet[1669]: E0516 01:36:41.141772 1669 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 16 01:36:41.145216 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 16 01:36:41.145561 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 16 01:36:41.355087 sudo[1677]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
May 16 01:36:41.356119 sudo[1677]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 16 01:36:41.375107 sudo[1677]: pam_unix(sudo:session): session closed for user root
May 16 01:36:41.549640 sshd[1663]: Connection closed by 172.24.4.1 port 37032
May 16 01:36:41.551243 sshd-session[1658]: pam_unix(sshd:session): session closed for user core
May 16 01:36:41.564183 systemd[1]: sshd@6-172.24.4.31:22-172.24.4.1:37032.service: Deactivated successfully.
May 16 01:36:41.567161 systemd[1]: session-9.scope: Deactivated successfully.
May 16 01:36:41.571005 systemd-logind[1441]: Session 9 logged out. Waiting for processes to exit.
May 16 01:36:41.577267 systemd[1]: Started sshd@7-172.24.4.31:22-172.24.4.1:37034.service - OpenSSH per-connection server daemon (172.24.4.1:37034).
May 16 01:36:41.579725 systemd-logind[1441]: Removed session 9.
May 16 01:36:43.065733 sshd[1682]: Accepted publickey for core from 172.24.4.1 port 37034 ssh2: RSA SHA256:hbqBC1N3eVnarOblqHTsu5pd4fHUzAJAaCt0vGQ0ke0
May 16 01:36:43.068630 sshd-session[1682]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 01:36:43.079140 systemd-logind[1441]: New session 10 of user core.
May 16 01:36:43.089935 systemd[1]: Started session-10.scope - Session 10 of User core.
May 16 01:36:43.531386 sudo[1686]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
May 16 01:36:43.532167 sudo[1686]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 16 01:36:43.539469 sudo[1686]: pam_unix(sudo:session): session closed for user root
May 16 01:36:43.550949 sudo[1685]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
May 16 01:36:43.552177 sudo[1685]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 16 01:36:43.589674 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 16 01:36:43.648170 augenrules[1708]: No rules
May 16 01:36:43.650692 systemd[1]: audit-rules.service: Deactivated successfully.
May 16 01:36:43.651129 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 16 01:36:43.654177 sudo[1685]: pam_unix(sudo:session): session closed for user root
May 16 01:36:43.837527 sshd[1684]: Connection closed by 172.24.4.1 port 37034
May 16 01:36:43.838394 sshd-session[1682]: pam_unix(sshd:session): session closed for user core
May 16 01:36:43.850234 systemd[1]: sshd@7-172.24.4.31:22-172.24.4.1:37034.service: Deactivated successfully.
May 16 01:36:43.853705 systemd[1]: session-10.scope: Deactivated successfully.
May 16 01:36:43.857168 systemd-logind[1441]: Session 10 logged out. Waiting for processes to exit.
May 16 01:36:43.863162 systemd[1]: Started sshd@8-172.24.4.31:22-172.24.4.1:43340.service - OpenSSH per-connection server daemon (172.24.4.1:43340).
May 16 01:36:43.866433 systemd-logind[1441]: Removed session 10.
May 16 01:36:45.278645 sshd[1716]: Accepted publickey for core from 172.24.4.1 port 43340 ssh2: RSA SHA256:hbqBC1N3eVnarOblqHTsu5pd4fHUzAJAaCt0vGQ0ke0
May 16 01:36:45.282710 sshd-session[1716]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 01:36:45.292320 systemd-logind[1441]: New session 11 of user core.
May 16 01:36:45.301895 systemd[1]: Started session-11.scope - Session 11 of User core.
May 16 01:36:45.719215 sudo[1719]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
May 16 01:36:45.720205 sudo[1719]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 16 01:36:46.399830 systemd[1]: Starting docker.service - Docker Application Container Engine...
May 16 01:36:46.416356 (dockerd)[1738]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
May 16 01:36:47.022333 dockerd[1738]: time="2025-05-16T01:36:47.021822054Z" level=info msg="Starting up"
May 16 01:36:47.176041 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2628249200-merged.mount: Deactivated successfully.
May 16 01:36:47.203778 systemd[1]: var-lib-docker-metacopy\x2dcheck1239956752-merged.mount: Deactivated successfully.
May 16 01:36:47.237299 dockerd[1738]: time="2025-05-16T01:36:47.237199995Z" level=info msg="Loading containers: start."
May 16 01:36:47.458631 kernel: Initializing XFRM netlink socket
May 16 01:36:47.565504 systemd-networkd[1365]: docker0: Link UP
May 16 01:36:47.591639 dockerd[1738]: time="2025-05-16T01:36:47.591602167Z" level=info msg="Loading containers: done."
May 16 01:36:47.611699 dockerd[1738]: time="2025-05-16T01:36:47.611248343Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
May 16 01:36:47.611699 dockerd[1738]: time="2025-05-16T01:36:47.611359361Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1
May 16 01:36:47.611699 dockerd[1738]: time="2025-05-16T01:36:47.611458086Z" level=info msg="Daemon has completed initialization"
May 16 01:36:47.660273 dockerd[1738]: time="2025-05-16T01:36:47.659161033Z" level=info msg="API listen on /run/docker.sock"
May 16 01:36:47.659333 systemd[1]: Started docker.service - Docker Application Container Engine.
May 16 01:36:49.274818 containerd[1464]: time="2025-05-16T01:36:49.274009136Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.9\""
May 16 01:36:50.059782 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2744714818.mount: Deactivated successfully.
May 16 01:36:51.395578 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
May 16 01:36:51.403199 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 16 01:36:51.521750 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 16 01:36:51.525170 (kubelet)[1986]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 16 01:36:51.693538 kubelet[1986]: E0516 01:36:51.693389 1986 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 16 01:36:51.696285 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 16 01:36:51.696441 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 16 01:36:51.865727 containerd[1464]: time="2025-05-16T01:36:51.865643628Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 01:36:51.867271 containerd[1464]: time="2025-05-16T01:36:51.867023357Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.9: active requests=0, bytes read=28078853"
May 16 01:36:51.868367 containerd[1464]: time="2025-05-16T01:36:51.868337384Z" level=info msg="ImageCreate event name:\"sha256:0c19e0eafbdfffa1317cf99a16478265a4cd746ef677de27b0be6a8b515f36b1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 01:36:51.872211 containerd[1464]: time="2025-05-16T01:36:51.872153772Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:5b68f0df22013422dc8fb9ddfcff513eb6fc92f9dbf8aae41555c895efef5a20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 01:36:51.873379 containerd[1464]: time="2025-05-16T01:36:51.873339620Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.9\" with image id \"sha256:0c19e0eafbdfffa1317cf99a16478265a4cd746ef677de27b0be6a8b515f36b1\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:5b68f0df22013422dc8fb9ddfcff513eb6fc92f9dbf8aae41555c895efef5a20\", size \"28075645\" in 2.599222332s"
May 16 01:36:51.873435 containerd[1464]: time="2025-05-16T01:36:51.873381999Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.9\" returns image reference \"sha256:0c19e0eafbdfffa1317cf99a16478265a4cd746ef677de27b0be6a8b515f36b1\""
May 16 01:36:51.874184 containerd[1464]: time="2025-05-16T01:36:51.874032601Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.9\""
May 16 01:36:53.761211 containerd[1464]: time="2025-05-16T01:36:53.760428992Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 01:36:53.767446 containerd[1464]: time="2025-05-16T01:36:53.767401728Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.9: active requests=0, bytes read=24713530"
May 16 01:36:53.767556 containerd[1464]: time="2025-05-16T01:36:53.767405816Z" level=info msg="ImageCreate event name:\"sha256:6aa3d581404ae6ae5dc355cb750aaedec843d2c99263d28fce50277e8e2a6ec2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 01:36:53.771507 containerd[1464]: time="2025-05-16T01:36:53.771458740Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:be9e7987d323b38a12e28436cff6d6ec6fc31ffdd3ea11eaa9d74852e9d31248\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 01:36:53.772794 containerd[1464]: time="2025-05-16T01:36:53.772761689Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.9\" with image id \"sha256:6aa3d581404ae6ae5dc355cb750aaedec843d2c99263d28fce50277e8e2a6ec2\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:be9e7987d323b38a12e28436cff6d6ec6fc31ffdd3ea11eaa9d74852e9d31248\", size \"26315362\" in 1.898699533s"
May 16 01:36:53.772852 containerd[1464]: time="2025-05-16T01:36:53.772794680Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.9\" returns image reference \"sha256:6aa3d581404ae6ae5dc355cb750aaedec843d2c99263d28fce50277e8e2a6ec2\""
May 16 01:36:53.773430 containerd[1464]: time="2025-05-16T01:36:53.773402142Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.9\""
May 16 01:36:55.537109 containerd[1464]: time="2025-05-16T01:36:55.537001424Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 01:36:55.539427 containerd[1464]: time="2025-05-16T01:36:55.539234679Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.9: active requests=0, bytes read=18784319"
May 16 01:36:55.540656 containerd[1464]: time="2025-05-16T01:36:55.540631846Z" level=info msg="ImageCreate event name:\"sha256:737ed3eafaf27a28ea9e13b736011bfed5bd349785ac6bc220b34eaf4adc51e3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 01:36:55.544021 containerd[1464]: time="2025-05-16T01:36:55.543974641Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:eb358c7346bb17ab2c639c3ff8ab76a147dec7ae609f5c0c2800233e42253ed1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 01:36:55.545545 containerd[1464]: time="2025-05-16T01:36:55.545076486Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.9\" with image id \"sha256:737ed3eafaf27a28ea9e13b736011bfed5bd349785ac6bc220b34eaf4adc51e3\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:eb358c7346bb17ab2c639c3ff8ab76a147dec7ae609f5c0c2800233e42253ed1\", size \"20386169\" in 1.771644559s"
May 16 01:36:55.545545 containerd[1464]: time="2025-05-16T01:36:55.545131709Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.9\" returns image reference \"sha256:737ed3eafaf27a28ea9e13b736011bfed5bd349785ac6bc220b34eaf4adc51e3\""
May 16 01:36:55.546007 containerd[1464]: time="2025-05-16T01:36:55.545791229Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.9\""
May 16 01:36:57.674368 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount793255745.mount: Deactivated successfully.
May 16 01:36:58.240947 containerd[1464]: time="2025-05-16T01:36:58.240905207Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 01:36:58.242772 containerd[1464]: time="2025-05-16T01:36:58.242736537Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.9: active requests=0, bytes read=30355631"
May 16 01:36:58.244119 containerd[1464]: time="2025-05-16T01:36:58.244077760Z" level=info msg="ImageCreate event name:\"sha256:11a47a71ed3ecf643e15a11990daed3b656279449ba9344db0b54652c4723578\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 01:36:58.246486 containerd[1464]: time="2025-05-16T01:36:58.246447055Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:fdf026cf2434537e499e9c739d189ca8fc57101d929ac5ccd8e24f979a9738c1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 01:36:58.247240 containerd[1464]: time="2025-05-16T01:36:58.247102869Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.9\" with image id \"sha256:11a47a71ed3ecf643e15a11990daed3b656279449ba9344db0b54652c4723578\", repo tag \"registry.k8s.io/kube-proxy:v1.31.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:fdf026cf2434537e499e9c739d189ca8fc57101d929ac5ccd8e24f979a9738c1\", size \"30354642\" in 2.701282465s"
May 16 01:36:58.247240 containerd[1464]: time="2025-05-16T01:36:58.247143916Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.9\" returns image reference \"sha256:11a47a71ed3ecf643e15a11990daed3b656279449ba9344db0b54652c4723578\""
May 16 01:36:58.247775 containerd[1464]: time="2025-05-16T01:36:58.247744257Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
May 16 01:36:58.951784 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2789830773.mount: Deactivated successfully.
May 16 01:37:00.648570 update_engine[1448]: I20250516 01:37:00.648505 1448 update_attempter.cc:509] Updating boot flags...
May 16 01:37:00.689860 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2068)
May 16 01:37:00.757090 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2068)
May 16 01:37:00.803639 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2068)
May 16 01:37:00.946534 containerd[1464]: time="2025-05-16T01:37:00.946330378Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 01:37:00.948382 containerd[1464]: time="2025-05-16T01:37:00.948324303Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565249"
May 16 01:37:00.949912 containerd[1464]: time="2025-05-16T01:37:00.949866254Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 01:37:00.953743 containerd[1464]: time="2025-05-16T01:37:00.953721165Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 01:37:00.955881 containerd[1464]: time="2025-05-16T01:37:00.955836407Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 2.708060241s"
May 16 01:37:00.955939 containerd[1464]: time="2025-05-16T01:37:00.955877894Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
May 16 01:37:00.956606 containerd[1464]: time="2025-05-16T01:37:00.956404136Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
May 16 01:37:01.509185 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1631077873.mount: Deactivated successfully.
May 16 01:37:01.518645 containerd[1464]: time="2025-05-16T01:37:01.517796882Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 01:37:01.520972 containerd[1464]: time="2025-05-16T01:37:01.520882889Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146"
May 16 01:37:01.522329 containerd[1464]: time="2025-05-16T01:37:01.522274559Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 01:37:01.529738 containerd[1464]: time="2025-05-16T01:37:01.529656012Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 01:37:01.531974 containerd[1464]: time="2025-05-16T01:37:01.531916045Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 575.474048ms"
May 16 01:37:01.532181 containerd[1464]: time="2025-05-16T01:37:01.532142127Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
May 16 01:37:01.533514 containerd[1464]: time="2025-05-16T01:37:01.532992496Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
May 16 01:37:01.836448 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
May 16 01:37:01.844046 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 16 01:37:02.045974 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 16 01:37:02.049230 (kubelet)[2088]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 16 01:37:02.481008 kubelet[2088]: E0516 01:37:02.480931 2088 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 16 01:37:02.483940 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 16 01:37:02.484093 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 16 01:37:02.977102 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4128367141.mount: Deactivated successfully.
May 16 01:37:05.882327 containerd[1464]: time="2025-05-16T01:37:05.882288484Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 01:37:05.884883 containerd[1464]: time="2025-05-16T01:37:05.884809278Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56780021" May 16 01:37:05.886511 containerd[1464]: time="2025-05-16T01:37:05.886472859Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 01:37:05.890479 containerd[1464]: time="2025-05-16T01:37:05.890445979Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 01:37:05.891990 containerd[1464]: time="2025-05-16T01:37:05.891955252Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 4.358880892s" May 16 01:37:05.892040 containerd[1464]: time="2025-05-16T01:37:05.891990278Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" May 16 01:37:09.195072 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 16 01:37:09.203107 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 16 01:37:09.231959 systemd[1]: Reloading requested from client PID 2177 ('systemctl') (unit session-11.scope)... May 16 01:37:09.231975 systemd[1]: Reloading... 
May 16 01:37:09.313605 zram_generator::config[2212]: No configuration found. May 16 01:37:09.462104 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 16 01:37:09.548403 systemd[1]: Reloading finished in 316 ms. May 16 01:37:09.598976 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 16 01:37:09.609832 (kubelet)[2274]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 16 01:37:09.611739 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 16 01:37:09.612740 systemd[1]: kubelet.service: Deactivated successfully. May 16 01:37:09.613266 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 16 01:37:09.625389 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 16 01:37:09.808932 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 16 01:37:09.826431 (kubelet)[2286]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 16 01:37:09.913449 kubelet[2286]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 16 01:37:09.913786 kubelet[2286]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 16 01:37:09.913835 kubelet[2286]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 16 01:37:09.913965 kubelet[2286]: I0516 01:37:09.913936 2286 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 16 01:37:10.715375 kubelet[2286]: I0516 01:37:10.715346 2286 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" May 16 01:37:10.715484 kubelet[2286]: I0516 01:37:10.715475 2286 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 16 01:37:10.715817 kubelet[2286]: I0516 01:37:10.715803 2286 server.go:934] "Client rotation is on, will bootstrap in background" May 16 01:37:10.751733 kubelet[2286]: E0516 01:37:10.751699 2286 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.24.4.31:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.24.4.31:6443: connect: connection refused" logger="UnhandledError" May 16 01:37:10.752214 kubelet[2286]: I0516 01:37:10.752198 2286 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 16 01:37:10.761825 kubelet[2286]: E0516 01:37:10.761768 2286 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 16 01:37:10.761996 kubelet[2286]: I0516 01:37:10.761932 2286 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 16 01:37:10.766396 kubelet[2286]: I0516 01:37:10.766380 2286 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 16 01:37:10.767855 kubelet[2286]: I0516 01:37:10.767754 2286 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 16 01:37:10.768046 kubelet[2286]: I0516 01:37:10.767876 2286 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 16 01:37:10.768109 kubelet[2286]: I0516 01:37:10.767900 2286 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4152-2-3-n-26e690edb8.novalocal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none
","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 16 01:37:10.768109 kubelet[2286]: I0516 01:37:10.768077 2286 topology_manager.go:138] "Creating topology manager with none policy" May 16 01:37:10.768109 kubelet[2286]: I0516 01:37:10.768086 2286 container_manager_linux.go:300] "Creating device plugin manager" May 16 01:37:10.768375 kubelet[2286]: I0516 01:37:10.768171 2286 state_mem.go:36] "Initialized new in-memory state store" May 16 01:37:10.772353 kubelet[2286]: I0516 01:37:10.772127 2286 kubelet.go:408] "Attempting to sync node with API server" May 16 01:37:10.772353 kubelet[2286]: I0516 01:37:10.772151 2286 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 16 01:37:10.772353 kubelet[2286]: I0516 01:37:10.772179 2286 kubelet.go:314] "Adding apiserver pod source" May 16 01:37:10.772353 kubelet[2286]: I0516 01:37:10.772196 2286 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 16 01:37:10.781121 kubelet[2286]: W0516 01:37:10.780571 2286 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.24.4.31:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152-2-3-n-26e690edb8.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.31:6443: connect: connection refused May 16 01:37:10.781121 kubelet[2286]: I0516 01:37:10.780706 2286 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" May 16 01:37:10.781121 kubelet[2286]: E0516 01:37:10.780708 2286 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.24.4.31:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152-2-3-n-26e690edb8.novalocal&limit=500&resourceVersion=0\": dial tcp 172.24.4.31:6443: connect: connection refused" logger="UnhandledError" May 16 01:37:10.781121 kubelet[2286]: W0516 01:37:10.780847 2286 reflector.go:561] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.24.4.31:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.24.4.31:6443: connect: connection refused May 16 01:37:10.781121 kubelet[2286]: E0516 01:37:10.780916 2286 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.24.4.31:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.24.4.31:6443: connect: connection refused" logger="UnhandledError" May 16 01:37:10.781121 kubelet[2286]: I0516 01:37:10.781138 2286 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 16 01:37:10.782249 kubelet[2286]: W0516 01:37:10.781184 2286 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 16 01:37:10.784766 kubelet[2286]: I0516 01:37:10.784021 2286 server.go:1274] "Started kubelet" May 16 01:37:10.786098 kubelet[2286]: I0516 01:37:10.786046 2286 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 16 01:37:10.789220 kubelet[2286]: I0516 01:37:10.789187 2286 server.go:449] "Adding debug handlers to kubelet server" May 16 01:37:10.795545 kubelet[2286]: I0516 01:37:10.795493 2286 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 16 01:37:10.795779 kubelet[2286]: I0516 01:37:10.795754 2286 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 16 01:37:10.797790 kubelet[2286]: E0516 01:37:10.795916 2286 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.24.4.31:6443/api/v1/namespaces/default/events\": dial tcp 172.24.4.31:6443: connect: connection refused" 
event="&Event{ObjectMeta:{ci-4152-2-3-n-26e690edb8.novalocal.183fde24f6cefff8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4152-2-3-n-26e690edb8.novalocal,UID:ci-4152-2-3-n-26e690edb8.novalocal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4152-2-3-n-26e690edb8.novalocal,},FirstTimestamp:2025-05-16 01:37:10.7839918 +0000 UTC m=+0.950661200,LastTimestamp:2025-05-16 01:37:10.7839918 +0000 UTC m=+0.950661200,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4152-2-3-n-26e690edb8.novalocal,}" May 16 01:37:10.800292 kubelet[2286]: I0516 01:37:10.800237 2286 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 16 01:37:10.803449 kubelet[2286]: E0516 01:37:10.802293 2286 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 16 01:37:10.803449 kubelet[2286]: I0516 01:37:10.802493 2286 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 16 01:37:10.806075 kubelet[2286]: E0516 01:37:10.806029 2286 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4152-2-3-n-26e690edb8.novalocal\" not found" May 16 01:37:10.806206 kubelet[2286]: I0516 01:37:10.806102 2286 volume_manager.go:289] "Starting Kubelet Volume Manager" May 16 01:37:10.806466 kubelet[2286]: I0516 01:37:10.806436 2286 desired_state_of_world_populator.go:147] "Desired state populator starts to run" May 16 01:37:10.806556 kubelet[2286]: I0516 01:37:10.806488 2286 reconciler.go:26] "Reconciler: start to sync state" May 16 01:37:10.807523 kubelet[2286]: W0516 01:37:10.807470 2286 reflector.go:561] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.24.4.31:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.31:6443: connect: connection refused May 16 01:37:10.807523 kubelet[2286]: E0516 01:37:10.807516 2286 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.24.4.31:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.24.4.31:6443: connect: connection refused" logger="UnhandledError" May 16 01:37:10.808924 kubelet[2286]: E0516 01:37:10.808876 2286 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.31:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152-2-3-n-26e690edb8.novalocal?timeout=10s\": dial tcp 172.24.4.31:6443: connect: connection refused" interval="200ms" May 16 01:37:10.809272 kubelet[2286]: I0516 01:37:10.809239 2286 factory.go:221] Registration of the containerd container factory successfully May 16 01:37:10.809272 kubelet[2286]: I0516 01:37:10.809256 2286 factory.go:221] Registration of the systemd container factory successfully May 16 01:37:10.809404 kubelet[2286]: I0516 01:37:10.809307 2286 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 16 01:37:10.829455 kubelet[2286]: I0516 01:37:10.829393 2286 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 16 01:37:10.831976 kubelet[2286]: I0516 01:37:10.831921 2286 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 16 01:37:10.832161 kubelet[2286]: I0516 01:37:10.832141 2286 status_manager.go:217] "Starting to sync pod status with apiserver" May 16 01:37:10.832346 kubelet[2286]: I0516 01:37:10.832325 2286 kubelet.go:2321] "Starting kubelet main sync loop" May 16 01:37:10.832807 kubelet[2286]: E0516 01:37:10.832636 2286 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 16 01:37:10.837192 kubelet[2286]: W0516 01:37:10.837116 2286 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.24.4.31:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.31:6443: connect: connection refused May 16 01:37:10.837192 kubelet[2286]: E0516 01:37:10.837174 2286 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.24.4.31:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.24.4.31:6443: connect: connection refused" logger="UnhandledError" May 16 01:37:10.844005 kubelet[2286]: I0516 01:37:10.843937 2286 cpu_manager.go:214] "Starting CPU manager" policy="none" May 16 01:37:10.844005 kubelet[2286]: I0516 01:37:10.843991 2286 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 16 01:37:10.844005 kubelet[2286]: I0516 01:37:10.844006 2286 state_mem.go:36] "Initialized new in-memory state store" May 16 01:37:10.849277 kubelet[2286]: I0516 01:37:10.849242 2286 policy_none.go:49] "None policy: Start" May 16 01:37:10.849971 kubelet[2286]: I0516 01:37:10.849944 2286 memory_manager.go:170] "Starting memorymanager" policy="None" May 16 01:37:10.850549 kubelet[2286]: I0516 01:37:10.850185 2286 state_mem.go:35] "Initializing new in-memory state store" May 16 01:37:10.866260 systemd[1]: Created slice kubepods.slice - libcontainer 
container kubepods.slice. May 16 01:37:10.881580 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 16 01:37:10.886284 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. May 16 01:37:10.894215 kubelet[2286]: I0516 01:37:10.893549 2286 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 16 01:37:10.894215 kubelet[2286]: I0516 01:37:10.893726 2286 eviction_manager.go:189] "Eviction manager: starting control loop" May 16 01:37:10.894215 kubelet[2286]: I0516 01:37:10.893736 2286 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 16 01:37:10.894215 kubelet[2286]: I0516 01:37:10.893936 2286 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 16 01:37:10.897333 kubelet[2286]: E0516 01:37:10.897298 2286 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4152-2-3-n-26e690edb8.novalocal\" not found" May 16 01:37:10.955108 systemd[1]: Created slice kubepods-burstable-pod8c217de342f43b5c9878b29ef979d750.slice - libcontainer container kubepods-burstable-pod8c217de342f43b5c9878b29ef979d750.slice. May 16 01:37:10.987557 systemd[1]: Created slice kubepods-burstable-pod351b880098763969412eb33474a71a92.slice - libcontainer container kubepods-burstable-pod351b880098763969412eb33474a71a92.slice. 
May 16 01:37:10.999048 kubelet[2286]: I0516 01:37:10.997756 2286 kubelet_node_status.go:72] "Attempting to register node" node="ci-4152-2-3-n-26e690edb8.novalocal" May 16 01:37:10.999048 kubelet[2286]: E0516 01:37:10.998391 2286 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.24.4.31:6443/api/v1/nodes\": dial tcp 172.24.4.31:6443: connect: connection refused" node="ci-4152-2-3-n-26e690edb8.novalocal" May 16 01:37:11.008110 systemd[1]: Created slice kubepods-burstable-pod82c7c1119c6c8aebee23efc2a2f0ae47.slice - libcontainer container kubepods-burstable-pod82c7c1119c6c8aebee23efc2a2f0ae47.slice. May 16 01:37:11.010389 kubelet[2286]: E0516 01:37:11.009741 2286 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.31:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152-2-3-n-26e690edb8.novalocal?timeout=10s\": dial tcp 172.24.4.31:6443: connect: connection refused" interval="400ms" May 16 01:37:11.108460 kubelet[2286]: I0516 01:37:11.108336 2286 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8c217de342f43b5c9878b29ef979d750-ca-certs\") pod \"kube-apiserver-ci-4152-2-3-n-26e690edb8.novalocal\" (UID: \"8c217de342f43b5c9878b29ef979d750\") " pod="kube-system/kube-apiserver-ci-4152-2-3-n-26e690edb8.novalocal" May 16 01:37:11.108460 kubelet[2286]: I0516 01:37:11.108427 2286 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8c217de342f43b5c9878b29ef979d750-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4152-2-3-n-26e690edb8.novalocal\" (UID: \"8c217de342f43b5c9878b29ef979d750\") " pod="kube-system/kube-apiserver-ci-4152-2-3-n-26e690edb8.novalocal" May 16 01:37:11.108775 kubelet[2286]: I0516 01:37:11.108488 2286 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/351b880098763969412eb33474a71a92-flexvolume-dir\") pod \"kube-controller-manager-ci-4152-2-3-n-26e690edb8.novalocal\" (UID: \"351b880098763969412eb33474a71a92\") " pod="kube-system/kube-controller-manager-ci-4152-2-3-n-26e690edb8.novalocal" May 16 01:37:11.108775 kubelet[2286]: I0516 01:37:11.108537 2286 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/351b880098763969412eb33474a71a92-kubeconfig\") pod \"kube-controller-manager-ci-4152-2-3-n-26e690edb8.novalocal\" (UID: \"351b880098763969412eb33474a71a92\") " pod="kube-system/kube-controller-manager-ci-4152-2-3-n-26e690edb8.novalocal" May 16 01:37:11.108775 kubelet[2286]: I0516 01:37:11.108632 2286 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/351b880098763969412eb33474a71a92-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4152-2-3-n-26e690edb8.novalocal\" (UID: \"351b880098763969412eb33474a71a92\") " pod="kube-system/kube-controller-manager-ci-4152-2-3-n-26e690edb8.novalocal" May 16 01:37:11.108775 kubelet[2286]: I0516 01:37:11.108681 2286 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8c217de342f43b5c9878b29ef979d750-k8s-certs\") pod \"kube-apiserver-ci-4152-2-3-n-26e690edb8.novalocal\" (UID: \"8c217de342f43b5c9878b29ef979d750\") " pod="kube-system/kube-apiserver-ci-4152-2-3-n-26e690edb8.novalocal" May 16 01:37:11.109108 kubelet[2286]: I0516 01:37:11.108721 2286 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/351b880098763969412eb33474a71a92-ca-certs\") pod 
\"kube-controller-manager-ci-4152-2-3-n-26e690edb8.novalocal\" (UID: \"351b880098763969412eb33474a71a92\") " pod="kube-system/kube-controller-manager-ci-4152-2-3-n-26e690edb8.novalocal" May 16 01:37:11.109108 kubelet[2286]: I0516 01:37:11.108761 2286 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/351b880098763969412eb33474a71a92-k8s-certs\") pod \"kube-controller-manager-ci-4152-2-3-n-26e690edb8.novalocal\" (UID: \"351b880098763969412eb33474a71a92\") " pod="kube-system/kube-controller-manager-ci-4152-2-3-n-26e690edb8.novalocal" May 16 01:37:11.109108 kubelet[2286]: I0516 01:37:11.108807 2286 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/82c7c1119c6c8aebee23efc2a2f0ae47-kubeconfig\") pod \"kube-scheduler-ci-4152-2-3-n-26e690edb8.novalocal\" (UID: \"82c7c1119c6c8aebee23efc2a2f0ae47\") " pod="kube-system/kube-scheduler-ci-4152-2-3-n-26e690edb8.novalocal" May 16 01:37:11.202570 kubelet[2286]: I0516 01:37:11.202383 2286 kubelet_node_status.go:72] "Attempting to register node" node="ci-4152-2-3-n-26e690edb8.novalocal" May 16 01:37:11.203214 kubelet[2286]: E0516 01:37:11.203076 2286 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.24.4.31:6443/api/v1/nodes\": dial tcp 172.24.4.31:6443: connect: connection refused" node="ci-4152-2-3-n-26e690edb8.novalocal" May 16 01:37:11.281840 containerd[1464]: time="2025-05-16T01:37:11.281557181Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4152-2-3-n-26e690edb8.novalocal,Uid:8c217de342f43b5c9878b29ef979d750,Namespace:kube-system,Attempt:0,}" May 16 01:37:11.301897 containerd[1464]: time="2025-05-16T01:37:11.301843394Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-ci-4152-2-3-n-26e690edb8.novalocal,Uid:351b880098763969412eb33474a71a92,Namespace:kube-system,Attempt:0,}" May 16 01:37:11.315502 containerd[1464]: time="2025-05-16T01:37:11.315329828Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4152-2-3-n-26e690edb8.novalocal,Uid:82c7c1119c6c8aebee23efc2a2f0ae47,Namespace:kube-system,Attempt:0,}" May 16 01:37:11.410990 kubelet[2286]: E0516 01:37:11.410908 2286 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.31:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152-2-3-n-26e690edb8.novalocal?timeout=10s\": dial tcp 172.24.4.31:6443: connect: connection refused" interval="800ms" May 16 01:37:11.608247 kubelet[2286]: I0516 01:37:11.607649 2286 kubelet_node_status.go:72] "Attempting to register node" node="ci-4152-2-3-n-26e690edb8.novalocal" May 16 01:37:11.608247 kubelet[2286]: E0516 01:37:11.608193 2286 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.24.4.31:6443/api/v1/nodes\": dial tcp 172.24.4.31:6443: connect: connection refused" node="ci-4152-2-3-n-26e690edb8.novalocal" May 16 01:37:11.687846 kubelet[2286]: W0516 01:37:11.687720 2286 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.24.4.31:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152-2-3-n-26e690edb8.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.31:6443: connect: connection refused May 16 01:37:11.688034 kubelet[2286]: E0516 01:37:11.687853 2286 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.24.4.31:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152-2-3-n-26e690edb8.novalocal&limit=500&resourceVersion=0\": dial tcp 172.24.4.31:6443: connect: connection refused" logger="UnhandledError" May 16 
May 16 01:37:11.701838 kubelet[2286]: W0516 01:37:11.701779 2286 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.24.4.31:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.31:6443: connect: connection refused
May 16 01:37:11.701969 kubelet[2286]: E0516 01:37:11.701842 2286 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.24.4.31:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.24.4.31:6443: connect: connection refused" logger="UnhandledError"
May 16 01:37:11.717366 kubelet[2286]: E0516 01:37:11.717113 2286 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.24.4.31:6443/api/v1/namespaces/default/events\": dial tcp 172.24.4.31:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4152-2-3-n-26e690edb8.novalocal.183fde24f6cefff8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4152-2-3-n-26e690edb8.novalocal,UID:ci-4152-2-3-n-26e690edb8.novalocal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4152-2-3-n-26e690edb8.novalocal,},FirstTimestamp:2025-05-16 01:37:10.7839918 +0000 UTC m=+0.950661200,LastTimestamp:2025-05-16 01:37:10.7839918 +0000 UTC m=+0.950661200,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4152-2-3-n-26e690edb8.novalocal,}"
May 16 01:37:11.931636 kubelet[2286]: W0516 01:37:11.931255 2286 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.24.4.31:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.31:6443: connect: connection refused
May 16 01:37:11.931636 kubelet[2286]: E0516 01:37:11.931405 2286 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.24.4.31:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.24.4.31:6443: connect: connection refused" logger="UnhandledError"
May 16 01:37:12.010246 kubelet[2286]: W0516 01:37:12.010052 2286 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.24.4.31:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.24.4.31:6443: connect: connection refused
May 16 01:37:12.010246 kubelet[2286]: E0516 01:37:12.010182 2286 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.24.4.31:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.24.4.31:6443: connect: connection refused" logger="UnhandledError"
May 16 01:37:12.212673 kubelet[2286]: E0516 01:37:12.212436 2286 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.31:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152-2-3-n-26e690edb8.novalocal?timeout=10s\": dial tcp 172.24.4.31:6443: connect: connection refused" interval="1.6s"
May 16 01:37:12.406187 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2022038416.mount: Deactivated successfully.
May 16 01:37:12.412032 kubelet[2286]: I0516 01:37:12.411702 2286 kubelet_node_status.go:72] "Attempting to register node" node="ci-4152-2-3-n-26e690edb8.novalocal"
May 16 01:37:12.413258 kubelet[2286]: E0516 01:37:12.413165 2286 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.24.4.31:6443/api/v1/nodes\": dial tcp 172.24.4.31:6443: connect: connection refused" node="ci-4152-2-3-n-26e690edb8.novalocal"
May 16 01:37:12.422973 containerd[1464]: time="2025-05-16T01:37:12.422877049Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 16 01:37:12.426688 containerd[1464]: time="2025-05-16T01:37:12.426559803Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 16 01:37:12.430029 containerd[1464]: time="2025-05-16T01:37:12.429927969Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064"
May 16 01:37:12.431302 containerd[1464]: time="2025-05-16T01:37:12.431237160Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
May 16 01:37:12.435239 containerd[1464]: time="2025-05-16T01:37:12.435123746Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 16 01:37:12.438334 containerd[1464]: time="2025-05-16T01:37:12.438061636Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 16 01:37:12.438559 containerd[1464]: time="2025-05-16T01:37:12.438489416Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
May 16 01:37:12.443802 containerd[1464]: time="2025-05-16T01:37:12.443713357Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 16 01:37:12.448290 containerd[1464]: time="2025-05-16T01:37:12.447389158Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.145391815s"
May 16 01:37:12.452492 containerd[1464]: time="2025-05-16T01:37:12.452404768Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.170620001s"
May 16 01:37:12.454196 containerd[1464]: time="2025-05-16T01:37:12.454138453Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.138662411s"
May 16 01:37:12.650246 containerd[1464]: time="2025-05-16T01:37:12.648204847Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 16 01:37:12.650246 containerd[1464]: time="2025-05-16T01:37:12.648286840Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 16 01:37:12.650246 containerd[1464]: time="2025-05-16T01:37:12.648302088Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 16 01:37:12.650246 containerd[1464]: time="2025-05-16T01:37:12.649536920Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 16 01:37:12.659566 containerd[1464]: time="2025-05-16T01:37:12.659402239Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 16 01:37:12.661810 containerd[1464]: time="2025-05-16T01:37:12.661733072Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 16 01:37:12.662572 containerd[1464]: time="2025-05-16T01:37:12.662503726Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 16 01:37:12.666073 containerd[1464]: time="2025-05-16T01:37:12.665926012Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 16 01:37:12.666211 containerd[1464]: time="2025-05-16T01:37:12.665916635Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 16 01:37:12.666334 containerd[1464]: time="2025-05-16T01:37:12.666225623Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 16 01:37:12.666334 containerd[1464]: time="2025-05-16T01:37:12.666251421Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 16 01:37:12.666678 containerd[1464]: time="2025-05-16T01:37:12.666438251Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 16 01:37:12.694787 systemd[1]: Started cri-containerd-b7bdbbb3dbe7099dfd9eb09f701ecfb02c683383f1bfea4d6a7fc08f4eecbdc9.scope - libcontainer container b7bdbbb3dbe7099dfd9eb09f701ecfb02c683383f1bfea4d6a7fc08f4eecbdc9.
May 16 01:37:12.709810 systemd[1]: Started cri-containerd-07911ffab215b2257837fa6749513428b1afba33a7d054089510ba7203007763.scope - libcontainer container 07911ffab215b2257837fa6749513428b1afba33a7d054089510ba7203007763.
May 16 01:37:12.711717 systemd[1]: Started cri-containerd-25d884ece3c27217b1ce2a508af50d13099794cc167a591bac8ae4a0f1ba054e.scope - libcontainer container 25d884ece3c27217b1ce2a508af50d13099794cc167a591bac8ae4a0f1ba054e.
May 16 01:37:12.767707 containerd[1464]: time="2025-05-16T01:37:12.767666922Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4152-2-3-n-26e690edb8.novalocal,Uid:351b880098763969412eb33474a71a92,Namespace:kube-system,Attempt:0,} returns sandbox id \"b7bdbbb3dbe7099dfd9eb09f701ecfb02c683383f1bfea4d6a7fc08f4eecbdc9\""
May 16 01:37:12.775128 containerd[1464]: time="2025-05-16T01:37:12.775087685Z" level=info msg="CreateContainer within sandbox \"b7bdbbb3dbe7099dfd9eb09f701ecfb02c683383f1bfea4d6a7fc08f4eecbdc9\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
May 16 01:37:12.785832 containerd[1464]: time="2025-05-16T01:37:12.785780393Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4152-2-3-n-26e690edb8.novalocal,Uid:82c7c1119c6c8aebee23efc2a2f0ae47,Namespace:kube-system,Attempt:0,} returns sandbox id \"25d884ece3c27217b1ce2a508af50d13099794cc167a591bac8ae4a0f1ba054e\""
May 16 01:37:12.789296 containerd[1464]: time="2025-05-16T01:37:12.789266208Z" level=info msg="CreateContainer within sandbox \"25d884ece3c27217b1ce2a508af50d13099794cc167a591bac8ae4a0f1ba054e\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
May 16 01:37:12.792825 containerd[1464]: time="2025-05-16T01:37:12.792454808Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4152-2-3-n-26e690edb8.novalocal,Uid:8c217de342f43b5c9878b29ef979d750,Namespace:kube-system,Attempt:0,} returns sandbox id \"07911ffab215b2257837fa6749513428b1afba33a7d054089510ba7203007763\""
May 16 01:37:12.799146 containerd[1464]: time="2025-05-16T01:37:12.799106670Z" level=info msg="CreateContainer within sandbox \"07911ffab215b2257837fa6749513428b1afba33a7d054089510ba7203007763\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
May 16 01:37:12.807612 containerd[1464]: time="2025-05-16T01:37:12.807387884Z" level=info msg="CreateContainer within sandbox \"b7bdbbb3dbe7099dfd9eb09f701ecfb02c683383f1bfea4d6a7fc08f4eecbdc9\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"23f71c54b35a04436a0f7bbbc57357e429535d7c0d1621dc2f7e5bc5f50f2051\""
May 16 01:37:12.808103 containerd[1464]: time="2025-05-16T01:37:12.808062317Z" level=info msg="StartContainer for \"23f71c54b35a04436a0f7bbbc57357e429535d7c0d1621dc2f7e5bc5f50f2051\""
May 16 01:37:12.835066 containerd[1464]: time="2025-05-16T01:37:12.834507565Z" level=info msg="CreateContainer within sandbox \"07911ffab215b2257837fa6749513428b1afba33a7d054089510ba7203007763\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"4f22bfd931ccb8ef1dcebca7c816e4fd9248fe028e22092bdfd9772f1d971f8b\""
May 16 01:37:12.835427 containerd[1464]: time="2025-05-16T01:37:12.834825731Z" level=info msg="CreateContainer within sandbox \"25d884ece3c27217b1ce2a508af50d13099794cc167a591bac8ae4a0f1ba054e\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"c70d5d75af9b677e3bac8470da5bd446e8233767848fd6e40d582ab8fe84feed\""
May 16 01:37:12.835889 containerd[1464]: time="2025-05-16T01:37:12.835758166Z" level=info msg="StartContainer for \"4f22bfd931ccb8ef1dcebca7c816e4fd9248fe028e22092bdfd9772f1d971f8b\""
May 16 01:37:12.835780 systemd[1]: Started cri-containerd-23f71c54b35a04436a0f7bbbc57357e429535d7c0d1621dc2f7e5bc5f50f2051.scope - libcontainer container 23f71c54b35a04436a0f7bbbc57357e429535d7c0d1621dc2f7e5bc5f50f2051.
May 16 01:37:12.836954 containerd[1464]: time="2025-05-16T01:37:12.836798754Z" level=info msg="StartContainer for \"c70d5d75af9b677e3bac8470da5bd446e8233767848fd6e40d582ab8fe84feed\""
May 16 01:37:12.858562 kubelet[2286]: E0516 01:37:12.858488 2286 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.24.4.31:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.24.4.31:6443: connect: connection refused" logger="UnhandledError"
May 16 01:37:12.887893 systemd[1]: Started cri-containerd-c70d5d75af9b677e3bac8470da5bd446e8233767848fd6e40d582ab8fe84feed.scope - libcontainer container c70d5d75af9b677e3bac8470da5bd446e8233767848fd6e40d582ab8fe84feed.
May 16 01:37:12.896960 systemd[1]: Started cri-containerd-4f22bfd931ccb8ef1dcebca7c816e4fd9248fe028e22092bdfd9772f1d971f8b.scope - libcontainer container 4f22bfd931ccb8ef1dcebca7c816e4fd9248fe028e22092bdfd9772f1d971f8b.
May 16 01:37:12.910906 containerd[1464]: time="2025-05-16T01:37:12.910759366Z" level=info msg="StartContainer for \"23f71c54b35a04436a0f7bbbc57357e429535d7c0d1621dc2f7e5bc5f50f2051\" returns successfully"
May 16 01:37:12.976487 containerd[1464]: time="2025-05-16T01:37:12.976419609Z" level=info msg="StartContainer for \"c70d5d75af9b677e3bac8470da5bd446e8233767848fd6e40d582ab8fe84feed\" returns successfully"
May 16 01:37:12.977196 containerd[1464]: time="2025-05-16T01:37:12.976472117Z" level=info msg="StartContainer for \"4f22bfd931ccb8ef1dcebca7c816e4fd9248fe028e22092bdfd9772f1d971f8b\" returns successfully"
May 16 01:37:14.017415 kubelet[2286]: I0516 01:37:14.016921 2286 kubelet_node_status.go:72] "Attempting to register node" node="ci-4152-2-3-n-26e690edb8.novalocal"
May 16 01:37:14.838132 kubelet[2286]: E0516 01:37:14.837965 2286 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4152-2-3-n-26e690edb8.novalocal\" not found" node="ci-4152-2-3-n-26e690edb8.novalocal"
May 16 01:37:15.082954 kubelet[2286]: I0516 01:37:15.082405 2286 kubelet_node_status.go:75] "Successfully registered node" node="ci-4152-2-3-n-26e690edb8.novalocal"
May 16 01:37:15.082954 kubelet[2286]: E0516 01:37:15.082467 2286 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ci-4152-2-3-n-26e690edb8.novalocal\": node \"ci-4152-2-3-n-26e690edb8.novalocal\" not found"
May 16 01:37:15.814766 kubelet[2286]: I0516 01:37:15.814688 2286 apiserver.go:52] "Watching apiserver"
May 16 01:37:15.907761 kubelet[2286]: I0516 01:37:15.907675 2286 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world"
May 16 01:37:16.008113 kubelet[2286]: W0516 01:37:16.008058 2286 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
May 16 01:37:16.011565 kubelet[2286]: W0516 01:37:16.011496 2286 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
May 16 01:37:18.152398 systemd[1]: Reloading requested from client PID 2562 ('systemctl') (unit session-11.scope)...
May 16 01:37:18.152440 systemd[1]: Reloading...
May 16 01:37:18.259196 zram_generator::config[2599]: No configuration found.
May 16 01:37:18.410095 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 16 01:37:18.514835 systemd[1]: Reloading finished in 360 ms.
May 16 01:37:18.554879 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
May 16 01:37:18.572652 systemd[1]: kubelet.service: Deactivated successfully.
May 16 01:37:18.572843 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 16 01:37:18.572894 systemd[1]: kubelet.service: Consumed 1.525s CPU time, 129.1M memory peak, 0B memory swap peak.
May 16 01:37:18.579918 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 16 01:37:18.830768 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 16 01:37:18.836699 (kubelet)[2664]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
May 16 01:37:18.914909 kubelet[2664]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 16 01:37:18.916615 kubelet[2664]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
May 16 01:37:18.916615 kubelet[2664]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 16 01:37:18.916615 kubelet[2664]: I0516 01:37:18.915273 2664 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
May 16 01:37:18.921838 kubelet[2664]: I0516 01:37:18.921802 2664 server.go:491] "Kubelet version" kubeletVersion="v1.31.8"
May 16 01:37:18.921838 kubelet[2664]: I0516 01:37:18.921831 2664 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
May 16 01:37:18.922112 kubelet[2664]: I0516 01:37:18.922088 2664 server.go:934] "Client rotation is on, will bootstrap in background"
May 16 01:37:18.926920 kubelet[2664]: I0516 01:37:18.926233 2664 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
May 16 01:37:18.987375 kubelet[2664]: I0516 01:37:18.987215 2664 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 16 01:37:19.001065 kubelet[2664]: E0516 01:37:19.001003 2664 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
May 16 01:37:19.001188 kubelet[2664]: I0516 01:37:19.001068 2664 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
May 16 01:37:19.010557 kubelet[2664]: I0516 01:37:19.010501 2664 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
May 16 01:37:19.010896 kubelet[2664]: I0516 01:37:19.010830 2664 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
May 16 01:37:19.011376 kubelet[2664]: I0516 01:37:19.011061 2664 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
May 16 01:37:19.012315 kubelet[2664]: I0516 01:37:19.011324 2664 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4152-2-3-n-26e690edb8.novalocal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
May 16 01:37:19.012424 kubelet[2664]: I0516 01:37:19.012314 2664 topology_manager.go:138] "Creating topology manager with none policy"
May 16 01:37:19.012424 kubelet[2664]: I0516 01:37:19.012342 2664 container_manager_linux.go:300] "Creating device plugin manager"
May 16 01:37:19.012424 kubelet[2664]: I0516 01:37:19.012398 2664 state_mem.go:36] "Initialized new in-memory state store"
May 16 01:37:19.012757 kubelet[2664]: I0516 01:37:19.012722 2664 kubelet.go:408] "Attempting to sync node with API server"
May 16 01:37:19.013853 kubelet[2664]: I0516 01:37:19.013795 2664 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
May 16 01:37:19.014033 kubelet[2664]: I0516 01:37:19.013880 2664 kubelet.go:314] "Adding apiserver pod source"
May 16 01:37:19.014033 kubelet[2664]: I0516 01:37:19.013918 2664 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
May 16 01:37:19.023340 kubelet[2664]: I0516 01:37:19.019660 2664 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
May 16 01:37:19.023340 kubelet[2664]: I0516 01:37:19.020128 2664 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
May 16 01:37:19.023340 kubelet[2664]: I0516 01:37:19.020517 2664 server.go:1274] "Started kubelet"
May 16 01:37:19.025178 kubelet[2664]: I0516 01:37:19.024271 2664 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
May 16 01:37:19.035414 kubelet[2664]: I0516 01:37:19.035335 2664 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
May 16 01:37:19.043116 kubelet[2664]: I0516 01:37:19.043059 2664 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
May 16 01:37:19.043363 kubelet[2664]: I0516 01:37:19.043344 2664 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
May 16 01:37:19.044723 kubelet[2664]: I0516 01:37:19.044692 2664 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
May 16 01:37:19.048196 kubelet[2664]: I0516 01:37:19.048167 2664 volume_manager.go:289] "Starting Kubelet Volume Manager"
May 16 01:37:19.048402 kubelet[2664]: E0516 01:37:19.048362 2664 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4152-2-3-n-26e690edb8.novalocal\" not found"
May 16 01:37:19.055530 kubelet[2664]: I0516 01:37:19.055492 2664 server.go:449] "Adding debug handlers to kubelet server"
May 16 01:37:19.057415 kubelet[2664]: I0516 01:37:19.057392 2664 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
May 16 01:37:19.058390 kubelet[2664]: I0516 01:37:19.057515 2664 reconciler.go:26] "Reconciler: start to sync state"
May 16 01:37:19.074121 kubelet[2664]: I0516 01:37:19.074037 2664 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
May 16 01:37:19.075897 kubelet[2664]: I0516 01:37:19.075370 2664 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
May 16 01:37:19.075897 kubelet[2664]: I0516 01:37:19.075400 2664 status_manager.go:217] "Starting to sync pod status with apiserver"
May 16 01:37:19.075897 kubelet[2664]: I0516 01:37:19.075849 2664 kubelet.go:2321] "Starting kubelet main sync loop"
May 16 01:37:19.076023 kubelet[2664]: E0516 01:37:19.075930 2664 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
May 16 01:37:19.078897 kubelet[2664]: I0516 01:37:19.078665 2664 factory.go:221] Registration of the systemd container factory successfully
May 16 01:37:19.078897 kubelet[2664]: I0516 01:37:19.078748 2664 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
May 16 01:37:19.084770 kubelet[2664]: I0516 01:37:19.083481 2664 factory.go:221] Registration of the containerd container factory successfully
May 16 01:37:19.126016 sudo[2695]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
May 16 01:37:19.126540 sudo[2695]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
May 16 01:37:19.154864 kubelet[2664]: I0516 01:37:19.154833 2664 cpu_manager.go:214] "Starting CPU manager" policy="none"
May 16 01:37:19.154864 kubelet[2664]: I0516 01:37:19.154853 2664 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
May 16 01:37:19.154864 kubelet[2664]: I0516 01:37:19.154871 2664 state_mem.go:36] "Initialized new in-memory state store"
May 16 01:37:19.155176 kubelet[2664]: I0516 01:37:19.155020 2664 state_mem.go:88] "Updated default CPUSet" cpuSet=""
May 16 01:37:19.155176 kubelet[2664]: I0516 01:37:19.155037 2664 state_mem.go:96] "Updated CPUSet assignments" assignments={}
May 16 01:37:19.155176 kubelet[2664]: I0516 01:37:19.155057 2664 policy_none.go:49] "None policy: Start"
May 16 01:37:19.156913 kubelet[2664]: I0516 01:37:19.156885 2664 memory_manager.go:170] "Starting memorymanager" policy="None"
May 16 01:37:19.156961 kubelet[2664]: I0516 01:37:19.156924 2664 state_mem.go:35] "Initializing new in-memory state store"
May 16 01:37:19.157356 kubelet[2664]: I0516 01:37:19.157063 2664 state_mem.go:75] "Updated machine memory state"
May 16 01:37:19.175491 kubelet[2664]: I0516 01:37:19.173804 2664 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
May 16 01:37:19.175491 kubelet[2664]: I0516 01:37:19.173971 2664 eviction_manager.go:189] "Eviction manager: starting control loop"
May 16 01:37:19.175491 kubelet[2664]: I0516 01:37:19.173982 2664 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
May 16 01:37:19.175491 kubelet[2664]: I0516 01:37:19.174221 2664 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
May 16 01:37:19.210543 kubelet[2664]: W0516 01:37:19.207031 2664 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
May 16 01:37:19.210543 kubelet[2664]: E0516 01:37:19.207087 2664 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4152-2-3-n-26e690edb8.novalocal\" already exists" pod="kube-system/kube-controller-manager-ci-4152-2-3-n-26e690edb8.novalocal"
May 16 01:37:19.210543 kubelet[2664]: W0516 01:37:19.209013 2664 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
May 16 01:37:19.210543 kubelet[2664]: E0516 01:37:19.209107 2664 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4152-2-3-n-26e690edb8.novalocal\" already exists" pod="kube-system/kube-apiserver-ci-4152-2-3-n-26e690edb8.novalocal"
May 16 01:37:19.210543 kubelet[2664]: W0516 01:37:19.209207 2664 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
May 16 01:37:19.259333 kubelet[2664]: I0516 01:37:19.259294 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/351b880098763969412eb33474a71a92-flexvolume-dir\") pod \"kube-controller-manager-ci-4152-2-3-n-26e690edb8.novalocal\" (UID: \"351b880098763969412eb33474a71a92\") " pod="kube-system/kube-controller-manager-ci-4152-2-3-n-26e690edb8.novalocal"
May 16 01:37:19.259416 kubelet[2664]: I0516 01:37:19.259335 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/351b880098763969412eb33474a71a92-k8s-certs\") pod \"kube-controller-manager-ci-4152-2-3-n-26e690edb8.novalocal\" (UID: \"351b880098763969412eb33474a71a92\") " pod="kube-system/kube-controller-manager-ci-4152-2-3-n-26e690edb8.novalocal"
May 16 01:37:19.259416 kubelet[2664]: I0516 01:37:19.259362 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/351b880098763969412eb33474a71a92-kubeconfig\") pod \"kube-controller-manager-ci-4152-2-3-n-26e690edb8.novalocal\" (UID: \"351b880098763969412eb33474a71a92\") " pod="kube-system/kube-controller-manager-ci-4152-2-3-n-26e690edb8.novalocal"
May 16 01:37:19.259416 kubelet[2664]: I0516 01:37:19.259381 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/82c7c1119c6c8aebee23efc2a2f0ae47-kubeconfig\") pod \"kube-scheduler-ci-4152-2-3-n-26e690edb8.novalocal\" (UID: \"82c7c1119c6c8aebee23efc2a2f0ae47\") " pod="kube-system/kube-scheduler-ci-4152-2-3-n-26e690edb8.novalocal"
May 16 01:37:19.259416 kubelet[2664]: I0516 01:37:19.259401 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8c217de342f43b5c9878b29ef979d750-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4152-2-3-n-26e690edb8.novalocal\" (UID: \"8c217de342f43b5c9878b29ef979d750\") " pod="kube-system/kube-apiserver-ci-4152-2-3-n-26e690edb8.novalocal"
May 16 01:37:19.259522 kubelet[2664]: I0516 01:37:19.259419 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/351b880098763969412eb33474a71a92-ca-certs\") pod \"kube-controller-manager-ci-4152-2-3-n-26e690edb8.novalocal\" (UID: \"351b880098763969412eb33474a71a92\") " pod="kube-system/kube-controller-manager-ci-4152-2-3-n-26e690edb8.novalocal"
May 16 01:37:19.259522 kubelet[2664]: I0516 01:37:19.259443 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/351b880098763969412eb33474a71a92-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4152-2-3-n-26e690edb8.novalocal\" (UID: \"351b880098763969412eb33474a71a92\") " pod="kube-system/kube-controller-manager-ci-4152-2-3-n-26e690edb8.novalocal"
May 16 01:37:19.259522 kubelet[2664]: I0516 01:37:19.259461 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8c217de342f43b5c9878b29ef979d750-ca-certs\") pod \"kube-apiserver-ci-4152-2-3-n-26e690edb8.novalocal\" (UID: \"8c217de342f43b5c9878b29ef979d750\") " pod="kube-system/kube-apiserver-ci-4152-2-3-n-26e690edb8.novalocal"
May 16 01:37:19.259522 kubelet[2664]: I0516 01:37:19.259479 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8c217de342f43b5c9878b29ef979d750-k8s-certs\") pod \"kube-apiserver-ci-4152-2-3-n-26e690edb8.novalocal\" (UID: \"8c217de342f43b5c9878b29ef979d750\") " pod="kube-system/kube-apiserver-ci-4152-2-3-n-26e690edb8.novalocal"
May 16 01:37:19.285513 kubelet[2664]: I0516 01:37:19.285487 2664 kubelet_node_status.go:72] "Attempting to register node" node="ci-4152-2-3-n-26e690edb8.novalocal"
May 16 01:37:19.306791 kubelet[2664]: I0516 01:37:19.306506 2664 kubelet_node_status.go:111] "Node was previously registered" node="ci-4152-2-3-n-26e690edb8.novalocal"
May 16 01:37:19.306791 kubelet[2664]: I0516 01:37:19.306576 2664 kubelet_node_status.go:75] "Successfully registered node" node="ci-4152-2-3-n-26e690edb8.novalocal"
May 16 01:37:19.726538 sudo[2695]: pam_unix(sudo:session): session closed for user root
May 16 01:37:20.019759 kubelet[2664]: I0516 01:37:20.019653 2664 apiserver.go:52] "Watching apiserver"
May 16 01:37:20.058114 kubelet[2664]: I0516 01:37:20.058067 2664 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world"
May 16 01:37:20.158631 kubelet[2664]: W0516 01:37:20.158609 2664 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
May 16 01:37:20.158928 kubelet[2664]: E0516 01:37:20.158808 2664 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4152-2-3-n-26e690edb8.novalocal\" already exists" pod="kube-system/kube-apiserver-ci-4152-2-3-n-26e690edb8.novalocal"
May 16 01:37:20.210380 kubelet[2664]: I0516 01:37:20.210261 2664 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4152-2-3-n-26e690edb8.novalocal" podStartSLOduration=1.210244694 podStartE2EDuration="1.210244694s" podCreationTimestamp="2025-05-16 01:37:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 01:37:20.188197127 +0000 UTC m=+1.343531905" watchObservedRunningTime="2025-05-16 01:37:20.210244694 +0000 UTC m=+1.365579452"
May 16 01:37:20.220202 kubelet[2664]: I0516 01:37:20.220010 2664 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4152-2-3-n-26e690edb8.novalocal" podStartSLOduration=5.219992255 podStartE2EDuration="5.219992255s" podCreationTimestamp="2025-05-16 01:37:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 01:37:20.210574932 +0000 UTC m=+1.365909700" watchObservedRunningTime="2025-05-16 01:37:20.219992255 +0000 UTC m=+1.375327013"
May 16 01:37:20.230369 kubelet[2664]: I0516 01:37:20.230315 2664 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4152-2-3-n-26e690edb8.novalocal" podStartSLOduration=5.23029655 podStartE2EDuration="5.23029655s" podCreationTimestamp="2025-05-16 01:37:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 01:37:20.220708386 +0000 UTC m=+1.376043164" watchObservedRunningTime="2025-05-16 01:37:20.23029655 +0000 UTC m=+1.385631318"
May 16 01:37:21.660141 sudo[1719]: pam_unix(sudo:session): session closed for user root
May 16 01:37:21.868081 sshd[1718]: Connection closed by 172.24.4.1 port 43340
May 16 01:37:21.869056 sshd-session[1716]: pam_unix(sshd:session): session closed for user core
May 16 01:37:21.876447 systemd[1]: sshd@8-172.24.4.31:22-172.24.4.1:43340.service: Deactivated successfully.
May 16 01:37:21.880445 systemd[1]: session-11.scope: Deactivated successfully.
May 16 01:37:21.881028 systemd[1]: session-11.scope: Consumed 6.021s CPU time, 150.6M memory peak, 0B memory swap peak.
May 16 01:37:21.882977 systemd-logind[1441]: Session 11 logged out. Waiting for processes to exit.
May 16 01:37:21.884888 systemd-logind[1441]: Removed session 11.
May 16 01:37:23.648314 kubelet[2664]: I0516 01:37:23.648067 2664 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
May 16 01:37:23.649292 containerd[1464]: time="2025-05-16T01:37:23.648825870Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
May 16 01:37:23.650652 kubelet[2664]: I0516 01:37:23.650020 2664 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
May 16 01:37:24.203739 systemd[1]: Created slice kubepods-besteffort-pod451bed2b_9348_46d6_bb2b_28f6a6ee3110.slice - libcontainer container kubepods-besteffort-pod451bed2b_9348_46d6_bb2b_28f6a6ee3110.slice.
May 16 01:37:24.208856 kubelet[2664]: W0516 01:37:24.208753 2664 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ci-4152-2-3-n-26e690edb8.novalocal" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4152-2-3-n-26e690edb8.novalocal' and this object
May 16 01:37:24.208856 kubelet[2664]: W0516 01:37:24.208798 2664 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4152-2-3-n-26e690edb8.novalocal" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4152-2-3-n-26e690edb8.novalocal' and this object
May 16 01:37:24.208856 kubelet[2664]: E0516 01:37:24.208810 2664 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:ci-4152-2-3-n-26e690edb8.novalocal\" cannot list resource \"configmaps\" in API group \"\" 
in the namespace \"kube-system\": no relationship found between node 'ci-4152-2-3-n-26e690edb8.novalocal' and this object" logger="UnhandledError" May 16 01:37:24.208856 kubelet[2664]: E0516 01:37:24.208841 2664 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:ci-4152-2-3-n-26e690edb8.novalocal\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4152-2-3-n-26e690edb8.novalocal' and this object" logger="UnhandledError" May 16 01:37:24.269056 systemd[1]: Created slice kubepods-burstable-pod31428c3c_0e29_425b_8a25_e6b8974e40c1.slice - libcontainer container kubepods-burstable-pod31428c3c_0e29_425b_8a25_e6b8974e40c1.slice. May 16 01:37:24.270756 kubelet[2664]: W0516 01:37:24.269818 2664 reflector.go:561] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ci-4152-2-3-n-26e690edb8.novalocal" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4152-2-3-n-26e690edb8.novalocal' and this object May 16 01:37:24.270756 kubelet[2664]: E0516 01:37:24.269857 2664 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"hubble-server-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"hubble-server-certs\" is forbidden: User \"system:node:ci-4152-2-3-n-26e690edb8.novalocal\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4152-2-3-n-26e690edb8.novalocal' and this object" logger="UnhandledError" May 16 01:37:24.270756 kubelet[2664]: W0516 01:37:24.269899 2664 reflector.go:561] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User 
"system:node:ci-4152-2-3-n-26e690edb8.novalocal" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4152-2-3-n-26e690edb8.novalocal' and this object May 16 01:37:24.270756 kubelet[2664]: E0516 01:37:24.269914 2664 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"cilium-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"cilium-config\" is forbidden: User \"system:node:ci-4152-2-3-n-26e690edb8.novalocal\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4152-2-3-n-26e690edb8.novalocal' and this object" logger="UnhandledError" May 16 01:37:24.290869 kubelet[2664]: I0516 01:37:24.290810 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/31428c3c-0e29-425b-8a25-e6b8974e40c1-host-proc-sys-net\") pod \"cilium-cv776\" (UID: \"31428c3c-0e29-425b-8a25-e6b8974e40c1\") " pod="kube-system/cilium-cv776" May 16 01:37:24.290869 kubelet[2664]: I0516 01:37:24.290864 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/31428c3c-0e29-425b-8a25-e6b8974e40c1-bpf-maps\") pod \"cilium-cv776\" (UID: \"31428c3c-0e29-425b-8a25-e6b8974e40c1\") " pod="kube-system/cilium-cv776" May 16 01:37:24.291070 kubelet[2664]: I0516 01:37:24.290892 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/31428c3c-0e29-425b-8a25-e6b8974e40c1-hostproc\") pod \"cilium-cv776\" (UID: \"31428c3c-0e29-425b-8a25-e6b8974e40c1\") " pod="kube-system/cilium-cv776" May 16 01:37:24.291070 kubelet[2664]: I0516 01:37:24.290933 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/31428c3c-0e29-425b-8a25-e6b8974e40c1-etc-cni-netd\") pod \"cilium-cv776\" (UID: \"31428c3c-0e29-425b-8a25-e6b8974e40c1\") " pod="kube-system/cilium-cv776" May 16 01:37:24.291070 kubelet[2664]: I0516 01:37:24.290970 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/31428c3c-0e29-425b-8a25-e6b8974e40c1-xtables-lock\") pod \"cilium-cv776\" (UID: \"31428c3c-0e29-425b-8a25-e6b8974e40c1\") " pod="kube-system/cilium-cv776" May 16 01:37:24.291070 kubelet[2664]: I0516 01:37:24.291000 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/31428c3c-0e29-425b-8a25-e6b8974e40c1-host-proc-sys-kernel\") pod \"cilium-cv776\" (UID: \"31428c3c-0e29-425b-8a25-e6b8974e40c1\") " pod="kube-system/cilium-cv776" May 16 01:37:24.291070 kubelet[2664]: I0516 01:37:24.291041 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/31428c3c-0e29-425b-8a25-e6b8974e40c1-cilium-cgroup\") pod \"cilium-cv776\" (UID: \"31428c3c-0e29-425b-8a25-e6b8974e40c1\") " pod="kube-system/cilium-cv776" May 16 01:37:24.291219 kubelet[2664]: I0516 01:37:24.291086 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/31428c3c-0e29-425b-8a25-e6b8974e40c1-cni-path\") pod \"cilium-cv776\" (UID: \"31428c3c-0e29-425b-8a25-e6b8974e40c1\") " pod="kube-system/cilium-cv776" May 16 01:37:24.291219 kubelet[2664]: I0516 01:37:24.291120 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/451bed2b-9348-46d6-bb2b-28f6a6ee3110-lib-modules\") pod \"kube-proxy-s97fv\" 
(UID: \"451bed2b-9348-46d6-bb2b-28f6a6ee3110\") " pod="kube-system/kube-proxy-s97fv" May 16 01:37:24.291219 kubelet[2664]: I0516 01:37:24.291147 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zm9vx\" (UniqueName: \"kubernetes.io/projected/451bed2b-9348-46d6-bb2b-28f6a6ee3110-kube-api-access-zm9vx\") pod \"kube-proxy-s97fv\" (UID: \"451bed2b-9348-46d6-bb2b-28f6a6ee3110\") " pod="kube-system/kube-proxy-s97fv" May 16 01:37:24.291219 kubelet[2664]: I0516 01:37:24.291169 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/451bed2b-9348-46d6-bb2b-28f6a6ee3110-xtables-lock\") pod \"kube-proxy-s97fv\" (UID: \"451bed2b-9348-46d6-bb2b-28f6a6ee3110\") " pod="kube-system/kube-proxy-s97fv" May 16 01:37:24.291219 kubelet[2664]: I0516 01:37:24.291195 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/31428c3c-0e29-425b-8a25-e6b8974e40c1-hubble-tls\") pod \"cilium-cv776\" (UID: \"31428c3c-0e29-425b-8a25-e6b8974e40c1\") " pod="kube-system/cilium-cv776" May 16 01:37:24.291347 kubelet[2664]: I0516 01:37:24.291219 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/451bed2b-9348-46d6-bb2b-28f6a6ee3110-kube-proxy\") pod \"kube-proxy-s97fv\" (UID: \"451bed2b-9348-46d6-bb2b-28f6a6ee3110\") " pod="kube-system/kube-proxy-s97fv" May 16 01:37:24.291347 kubelet[2664]: I0516 01:37:24.291247 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g8ndv\" (UniqueName: \"kubernetes.io/projected/31428c3c-0e29-425b-8a25-e6b8974e40c1-kube-api-access-g8ndv\") pod \"cilium-cv776\" (UID: \"31428c3c-0e29-425b-8a25-e6b8974e40c1\") " pod="kube-system/cilium-cv776" May 16 
01:37:24.291347 kubelet[2664]: I0516 01:37:24.291267 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/31428c3c-0e29-425b-8a25-e6b8974e40c1-cilium-run\") pod \"cilium-cv776\" (UID: \"31428c3c-0e29-425b-8a25-e6b8974e40c1\") " pod="kube-system/cilium-cv776" May 16 01:37:24.291347 kubelet[2664]: I0516 01:37:24.291290 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/31428c3c-0e29-425b-8a25-e6b8974e40c1-lib-modules\") pod \"cilium-cv776\" (UID: \"31428c3c-0e29-425b-8a25-e6b8974e40c1\") " pod="kube-system/cilium-cv776" May 16 01:37:24.291347 kubelet[2664]: I0516 01:37:24.291325 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/31428c3c-0e29-425b-8a25-e6b8974e40c1-clustermesh-secrets\") pod \"cilium-cv776\" (UID: \"31428c3c-0e29-425b-8a25-e6b8974e40c1\") " pod="kube-system/cilium-cv776" May 16 01:37:24.291472 kubelet[2664]: I0516 01:37:24.291356 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/31428c3c-0e29-425b-8a25-e6b8974e40c1-cilium-config-path\") pod \"cilium-cv776\" (UID: \"31428c3c-0e29-425b-8a25-e6b8974e40c1\") " pod="kube-system/cilium-cv776" May 16 01:37:24.753548 systemd[1]: Created slice kubepods-besteffort-pod07e7820b_3ca5_49a7_b324_f7e817a58649.slice - libcontainer container kubepods-besteffort-pod07e7820b_3ca5_49a7_b324_f7e817a58649.slice. 
May 16 01:37:24.796120 kubelet[2664]: I0516 01:37:24.795989 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fgffg\" (UniqueName: \"kubernetes.io/projected/07e7820b-3ca5-49a7-b324-f7e817a58649-kube-api-access-fgffg\") pod \"cilium-operator-5d85765b45-5vvs5\" (UID: \"07e7820b-3ca5-49a7-b324-f7e817a58649\") " pod="kube-system/cilium-operator-5d85765b45-5vvs5" May 16 01:37:24.796120 kubelet[2664]: I0516 01:37:24.796045 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/07e7820b-3ca5-49a7-b324-f7e817a58649-cilium-config-path\") pod \"cilium-operator-5d85765b45-5vvs5\" (UID: \"07e7820b-3ca5-49a7-b324-f7e817a58649\") " pod="kube-system/cilium-operator-5d85765b45-5vvs5" May 16 01:37:25.393846 kubelet[2664]: E0516 01:37:25.393757 2664 projected.go:263] Couldn't get secret kube-system/hubble-server-certs: failed to sync secret cache: timed out waiting for the condition May 16 01:37:25.393846 kubelet[2664]: E0516 01:37:25.393817 2664 projected.go:194] Error preparing data for projected volume hubble-tls for pod kube-system/cilium-cv776: failed to sync secret cache: timed out waiting for the condition May 16 01:37:25.394134 kubelet[2664]: E0516 01:37:25.393952 2664 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/31428c3c-0e29-425b-8a25-e6b8974e40c1-hubble-tls podName:31428c3c-0e29-425b-8a25-e6b8974e40c1 nodeName:}" failed. No retries permitted until 2025-05-16 01:37:25.893909255 +0000 UTC m=+7.049244073 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "hubble-tls" (UniqueName: "kubernetes.io/projected/31428c3c-0e29-425b-8a25-e6b8974e40c1-hubble-tls") pod "cilium-cv776" (UID: "31428c3c-0e29-425b-8a25-e6b8974e40c1") : failed to sync secret cache: timed out waiting for the condition May 16 01:37:25.416666 kubelet[2664]: E0516 01:37:25.416438 2664 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition May 16 01:37:25.416666 kubelet[2664]: E0516 01:37:25.416494 2664 projected.go:194] Error preparing data for projected volume kube-api-access-zm9vx for pod kube-system/kube-proxy-s97fv: failed to sync configmap cache: timed out waiting for the condition May 16 01:37:25.416666 kubelet[2664]: E0516 01:37:25.416616 2664 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/451bed2b-9348-46d6-bb2b-28f6a6ee3110-kube-api-access-zm9vx podName:451bed2b-9348-46d6-bb2b-28f6a6ee3110 nodeName:}" failed. No retries permitted until 2025-05-16 01:37:25.916548793 +0000 UTC m=+7.071883601 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-zm9vx" (UniqueName: "kubernetes.io/projected/451bed2b-9348-46d6-bb2b-28f6a6ee3110-kube-api-access-zm9vx") pod "kube-proxy-s97fv" (UID: "451bed2b-9348-46d6-bb2b-28f6a6ee3110") : failed to sync configmap cache: timed out waiting for the condition May 16 01:37:25.422192 kubelet[2664]: E0516 01:37:25.422024 2664 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition May 16 01:37:25.422192 kubelet[2664]: E0516 01:37:25.422070 2664 projected.go:194] Error preparing data for projected volume kube-api-access-g8ndv for pod kube-system/cilium-cv776: failed to sync configmap cache: timed out waiting for the condition May 16 01:37:25.422192 kubelet[2664]: E0516 01:37:25.422142 2664 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/31428c3c-0e29-425b-8a25-e6b8974e40c1-kube-api-access-g8ndv podName:31428c3c-0e29-425b-8a25-e6b8974e40c1 nodeName:}" failed. No retries permitted until 2025-05-16 01:37:25.92211315 +0000 UTC m=+7.077447958 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-g8ndv" (UniqueName: "kubernetes.io/projected/31428c3c-0e29-425b-8a25-e6b8974e40c1-kube-api-access-g8ndv") pod "cilium-cv776" (UID: "31428c3c-0e29-425b-8a25-e6b8974e40c1") : failed to sync configmap cache: timed out waiting for the condition May 16 01:37:25.663324 containerd[1464]: time="2025-05-16T01:37:25.663046568Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-5vvs5,Uid:07e7820b-3ca5-49a7-b324-f7e817a58649,Namespace:kube-system,Attempt:0,}" May 16 01:37:25.870870 containerd[1464]: time="2025-05-16T01:37:25.870332869Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 16 01:37:25.870870 containerd[1464]: time="2025-05-16T01:37:25.870685089Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 16 01:37:25.871232 containerd[1464]: time="2025-05-16T01:37:25.871078116Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 01:37:25.872697 containerd[1464]: time="2025-05-16T01:37:25.872379995Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 01:37:25.914620 systemd[1]: run-containerd-runc-k8s.io-bb375f658fead2ee1875c70fda20f610293f083f1e6d20dc81a1e988e969a77d-runc.3BHVkT.mount: Deactivated successfully. May 16 01:37:25.921980 systemd[1]: Started cri-containerd-bb375f658fead2ee1875c70fda20f610293f083f1e6d20dc81a1e988e969a77d.scope - libcontainer container bb375f658fead2ee1875c70fda20f610293f083f1e6d20dc81a1e988e969a77d. 
May 16 01:37:25.965217 containerd[1464]: time="2025-05-16T01:37:25.965169566Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-5vvs5,Uid:07e7820b-3ca5-49a7-b324-f7e817a58649,Namespace:kube-system,Attempt:0,} returns sandbox id \"bb375f658fead2ee1875c70fda20f610293f083f1e6d20dc81a1e988e969a77d\"" May 16 01:37:25.967802 containerd[1464]: time="2025-05-16T01:37:25.967654142Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" May 16 01:37:26.013075 containerd[1464]: time="2025-05-16T01:37:26.013028786Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-s97fv,Uid:451bed2b-9348-46d6-bb2b-28f6a6ee3110,Namespace:kube-system,Attempt:0,}" May 16 01:37:26.073825 containerd[1464]: time="2025-05-16T01:37:26.073412613Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-cv776,Uid:31428c3c-0e29-425b-8a25-e6b8974e40c1,Namespace:kube-system,Attempt:0,}" May 16 01:37:26.078704 containerd[1464]: time="2025-05-16T01:37:26.078269737Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 16 01:37:26.079839 containerd[1464]: time="2025-05-16T01:37:26.079762465Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 16 01:37:26.079839 containerd[1464]: time="2025-05-16T01:37:26.079803941Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 01:37:26.084128 containerd[1464]: time="2025-05-16T01:37:26.080107982Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 01:37:26.113912 systemd[1]: Started cri-containerd-362ecec6048a71a2299671caf1b9bfbba4153c17716e2f727ad918e8c7a4ba9f.scope - libcontainer container 362ecec6048a71a2299671caf1b9bfbba4153c17716e2f727ad918e8c7a4ba9f. May 16 01:37:26.116354 containerd[1464]: time="2025-05-16T01:37:26.116009787Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 16 01:37:26.116487 containerd[1464]: time="2025-05-16T01:37:26.116077303Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 16 01:37:26.116724 containerd[1464]: time="2025-05-16T01:37:26.116576639Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 01:37:26.117318 containerd[1464]: time="2025-05-16T01:37:26.117267433Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 01:37:26.141940 systemd[1]: Started cri-containerd-4d4ff822c40dac53726d93b05dfee21ecc81b2369df29491b497929459f866f9.scope - libcontainer container 4d4ff822c40dac53726d93b05dfee21ecc81b2369df29491b497929459f866f9. 
May 16 01:37:26.161171 containerd[1464]: time="2025-05-16T01:37:26.161115340Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-s97fv,Uid:451bed2b-9348-46d6-bb2b-28f6a6ee3110,Namespace:kube-system,Attempt:0,} returns sandbox id \"362ecec6048a71a2299671caf1b9bfbba4153c17716e2f727ad918e8c7a4ba9f\"" May 16 01:37:26.168895 containerd[1464]: time="2025-05-16T01:37:26.167358861Z" level=info msg="CreateContainer within sandbox \"362ecec6048a71a2299671caf1b9bfbba4153c17716e2f727ad918e8c7a4ba9f\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 16 01:37:26.178818 containerd[1464]: time="2025-05-16T01:37:26.178638560Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-cv776,Uid:31428c3c-0e29-425b-8a25-e6b8974e40c1,Namespace:kube-system,Attempt:0,} returns sandbox id \"4d4ff822c40dac53726d93b05dfee21ecc81b2369df29491b497929459f866f9\"" May 16 01:37:26.203415 containerd[1464]: time="2025-05-16T01:37:26.203377634Z" level=info msg="CreateContainer within sandbox \"362ecec6048a71a2299671caf1b9bfbba4153c17716e2f727ad918e8c7a4ba9f\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"42b2e6ea0eb36872b45bc55082a6a43b37c3f0b48d0100f71156b336343e44f3\"" May 16 01:37:26.204672 containerd[1464]: time="2025-05-16T01:37:26.204622196Z" level=info msg="StartContainer for \"42b2e6ea0eb36872b45bc55082a6a43b37c3f0b48d0100f71156b336343e44f3\"" May 16 01:37:26.235835 systemd[1]: Started cri-containerd-42b2e6ea0eb36872b45bc55082a6a43b37c3f0b48d0100f71156b336343e44f3.scope - libcontainer container 42b2e6ea0eb36872b45bc55082a6a43b37c3f0b48d0100f71156b336343e44f3. May 16 01:37:26.269199 containerd[1464]: time="2025-05-16T01:37:26.269155290Z" level=info msg="StartContainer for \"42b2e6ea0eb36872b45bc55082a6a43b37c3f0b48d0100f71156b336343e44f3\" returns successfully" May 16 01:37:28.098922 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1174249108.mount: Deactivated successfully. 
May 16 01:37:30.634255 kubelet[2664]: I0516 01:37:30.632694 2664 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-s97fv" podStartSLOduration=6.632651719 podStartE2EDuration="6.632651719s" podCreationTimestamp="2025-05-16 01:37:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 01:37:27.18196218 +0000 UTC m=+8.337296998" watchObservedRunningTime="2025-05-16 01:37:30.632651719 +0000 UTC m=+11.787986567" May 16 01:37:34.330730 containerd[1464]: time="2025-05-16T01:37:34.330674103Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 01:37:34.332183 containerd[1464]: time="2025-05-16T01:37:34.332028553Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" May 16 01:37:34.333375 containerd[1464]: time="2025-05-16T01:37:34.333315395Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 01:37:34.335035 containerd[1464]: time="2025-05-16T01:37:34.334920413Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 8.367102924s" May 16 01:37:34.335035 containerd[1464]: time="2025-05-16T01:37:34.334956741Z" level=info msg="PullImage 
\"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" May 16 01:37:34.337382 containerd[1464]: time="2025-05-16T01:37:34.337248378Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" May 16 01:37:34.337673 containerd[1464]: time="2025-05-16T01:37:34.337639531Z" level=info msg="CreateContainer within sandbox \"bb375f658fead2ee1875c70fda20f610293f083f1e6d20dc81a1e988e969a77d\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" May 16 01:37:34.361352 containerd[1464]: time="2025-05-16T01:37:34.361282845Z" level=info msg="CreateContainer within sandbox \"bb375f658fead2ee1875c70fda20f610293f083f1e6d20dc81a1e988e969a77d\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"1cbd2c43f46565b93389d09aef32b7973918a735edef87b7534764b9c3142744\"" May 16 01:37:34.361792 containerd[1464]: time="2025-05-16T01:37:34.361763095Z" level=info msg="StartContainer for \"1cbd2c43f46565b93389d09aef32b7973918a735edef87b7534764b9c3142744\"" May 16 01:37:34.396777 systemd[1]: Started cri-containerd-1cbd2c43f46565b93389d09aef32b7973918a735edef87b7534764b9c3142744.scope - libcontainer container 1cbd2c43f46565b93389d09aef32b7973918a735edef87b7534764b9c3142744. May 16 01:37:34.428266 containerd[1464]: time="2025-05-16T01:37:34.428224329Z" level=info msg="StartContainer for \"1cbd2c43f46565b93389d09aef32b7973918a735edef87b7534764b9c3142744\" returns successfully" May 16 01:37:40.023999 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1567905043.mount: Deactivated successfully. 
May 16 01:37:43.512554 containerd[1464]: time="2025-05-16T01:37:43.512342032Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 01:37:43.514889 containerd[1464]: time="2025-05-16T01:37:43.514783340Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" May 16 01:37:43.516454 containerd[1464]: time="2025-05-16T01:37:43.516343725Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 01:37:43.523894 containerd[1464]: time="2025-05-16T01:37:43.523801740Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 9.186414542s" May 16 01:37:43.524150 containerd[1464]: time="2025-05-16T01:37:43.523925061Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" May 16 01:37:43.529272 containerd[1464]: time="2025-05-16T01:37:43.529204671Z" level=info msg="CreateContainer within sandbox \"4d4ff822c40dac53726d93b05dfee21ecc81b2369df29491b497929459f866f9\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 16 01:37:43.565292 containerd[1464]: time="2025-05-16T01:37:43.565042247Z" level=info msg="CreateContainer within sandbox 
\"4d4ff822c40dac53726d93b05dfee21ecc81b2369df29491b497929459f866f9\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"95b4a4238de3618a26807c25586973a11ec473cbc69c0a4d97616300320b83d8\"" May 16 01:37:43.567948 containerd[1464]: time="2025-05-16T01:37:43.566714553Z" level=info msg="StartContainer for \"95b4a4238de3618a26807c25586973a11ec473cbc69c0a4d97616300320b83d8\"" May 16 01:37:43.677661 systemd[1]: run-containerd-runc-k8s.io-95b4a4238de3618a26807c25586973a11ec473cbc69c0a4d97616300320b83d8-runc.dxSL3D.mount: Deactivated successfully. May 16 01:37:43.686754 systemd[1]: Started cri-containerd-95b4a4238de3618a26807c25586973a11ec473cbc69c0a4d97616300320b83d8.scope - libcontainer container 95b4a4238de3618a26807c25586973a11ec473cbc69c0a4d97616300320b83d8. May 16 01:37:43.732978 containerd[1464]: time="2025-05-16T01:37:43.732911499Z" level=info msg="StartContainer for \"95b4a4238de3618a26807c25586973a11ec473cbc69c0a4d97616300320b83d8\" returns successfully" May 16 01:37:43.743399 systemd[1]: cri-containerd-95b4a4238de3618a26807c25586973a11ec473cbc69c0a4d97616300320b83d8.scope: Deactivated successfully. May 16 01:37:44.419416 kubelet[2664]: I0516 01:37:44.418388 2664 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-5vvs5" podStartSLOduration=12.048657116 podStartE2EDuration="20.418347871s" podCreationTimestamp="2025-05-16 01:37:24 +0000 UTC" firstStartedPulling="2025-05-16 01:37:25.966540776 +0000 UTC m=+7.121875534" lastFinishedPulling="2025-05-16 01:37:34.336231521 +0000 UTC m=+15.491566289" observedRunningTime="2025-05-16 01:37:35.250697144 +0000 UTC m=+16.406031902" watchObservedRunningTime="2025-05-16 01:37:44.418347871 +0000 UTC m=+25.573682740" May 16 01:37:44.556403 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-95b4a4238de3618a26807c25586973a11ec473cbc69c0a4d97616300320b83d8-rootfs.mount: Deactivated successfully. 
May 16 01:37:44.792100 containerd[1464]: time="2025-05-16T01:37:44.791796674Z" level=info msg="shim disconnected" id=95b4a4238de3618a26807c25586973a11ec473cbc69c0a4d97616300320b83d8 namespace=k8s.io
May 16 01:37:44.792100 containerd[1464]: time="2025-05-16T01:37:44.791903604Z" level=warning msg="cleaning up after shim disconnected" id=95b4a4238de3618a26807c25586973a11ec473cbc69c0a4d97616300320b83d8 namespace=k8s.io
May 16 01:37:44.792100 containerd[1464]: time="2025-05-16T01:37:44.791936646Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 16 01:37:45.232747 containerd[1464]: time="2025-05-16T01:37:45.231426767Z" level=info msg="CreateContainer within sandbox \"4d4ff822c40dac53726d93b05dfee21ecc81b2369df29491b497929459f866f9\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
May 16 01:37:45.337659 containerd[1464]: time="2025-05-16T01:37:45.330814388Z" level=info msg="CreateContainer within sandbox \"4d4ff822c40dac53726d93b05dfee21ecc81b2369df29491b497929459f866f9\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"cfbef090724045725d2ed11378ca8f160a6f6ebee9de5eb152aa2d2f5e257540\""
May 16 01:37:45.337659 containerd[1464]: time="2025-05-16T01:37:45.333699578Z" level=info msg="StartContainer for \"cfbef090724045725d2ed11378ca8f160a6f6ebee9de5eb152aa2d2f5e257540\""
May 16 01:37:45.336006 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2316720547.mount: Deactivated successfully.
May 16 01:37:45.392730 systemd[1]: Started cri-containerd-cfbef090724045725d2ed11378ca8f160a6f6ebee9de5eb152aa2d2f5e257540.scope - libcontainer container cfbef090724045725d2ed11378ca8f160a6f6ebee9de5eb152aa2d2f5e257540.
May 16 01:37:45.424053 containerd[1464]: time="2025-05-16T01:37:45.423936662Z" level=info msg="StartContainer for \"cfbef090724045725d2ed11378ca8f160a6f6ebee9de5eb152aa2d2f5e257540\" returns successfully"
May 16 01:37:45.432751 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 16 01:37:45.433498 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 16 01:37:45.433713 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
May 16 01:37:45.440261 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 16 01:37:45.440749 systemd[1]: cri-containerd-cfbef090724045725d2ed11378ca8f160a6f6ebee9de5eb152aa2d2f5e257540.scope: Deactivated successfully.
May 16 01:37:45.465978 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 16 01:37:45.474888 containerd[1464]: time="2025-05-16T01:37:45.474822858Z" level=info msg="shim disconnected" id=cfbef090724045725d2ed11378ca8f160a6f6ebee9de5eb152aa2d2f5e257540 namespace=k8s.io
May 16 01:37:45.474888 containerd[1464]: time="2025-05-16T01:37:45.474876338Z" level=warning msg="cleaning up after shim disconnected" id=cfbef090724045725d2ed11378ca8f160a6f6ebee9de5eb152aa2d2f5e257540 namespace=k8s.io
May 16 01:37:45.474888 containerd[1464]: time="2025-05-16T01:37:45.474886477Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 16 01:37:45.551405 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cfbef090724045725d2ed11378ca8f160a6f6ebee9de5eb152aa2d2f5e257540-rootfs.mount: Deactivated successfully.
May 16 01:37:46.242715 containerd[1464]: time="2025-05-16T01:37:46.242150778Z" level=info msg="CreateContainer within sandbox \"4d4ff822c40dac53726d93b05dfee21ecc81b2369df29491b497929459f866f9\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
May 16 01:37:46.286213 containerd[1464]: time="2025-05-16T01:37:46.286084619Z" level=info msg="CreateContainer within sandbox \"4d4ff822c40dac53726d93b05dfee21ecc81b2369df29491b497929459f866f9\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"a2a99bc74137562981aca05e9de136e16f967a33dccebaa6edc0a0bb4276c7c2\""
May 16 01:37:46.288291 containerd[1464]: time="2025-05-16T01:37:46.287437877Z" level=info msg="StartContainer for \"a2a99bc74137562981aca05e9de136e16f967a33dccebaa6edc0a0bb4276c7c2\""
May 16 01:37:46.343760 systemd[1]: Started cri-containerd-a2a99bc74137562981aca05e9de136e16f967a33dccebaa6edc0a0bb4276c7c2.scope - libcontainer container a2a99bc74137562981aca05e9de136e16f967a33dccebaa6edc0a0bb4276c7c2.
May 16 01:37:46.379569 systemd[1]: cri-containerd-a2a99bc74137562981aca05e9de136e16f967a33dccebaa6edc0a0bb4276c7c2.scope: Deactivated successfully.
May 16 01:37:46.380755 containerd[1464]: time="2025-05-16T01:37:46.380718730Z" level=info msg="StartContainer for \"a2a99bc74137562981aca05e9de136e16f967a33dccebaa6edc0a0bb4276c7c2\" returns successfully"
May 16 01:37:46.404949 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a2a99bc74137562981aca05e9de136e16f967a33dccebaa6edc0a0bb4276c7c2-rootfs.mount: Deactivated successfully.
May 16 01:37:46.414036 containerd[1464]: time="2025-05-16T01:37:46.413954531Z" level=info msg="shim disconnected" id=a2a99bc74137562981aca05e9de136e16f967a33dccebaa6edc0a0bb4276c7c2 namespace=k8s.io
May 16 01:37:46.414216 containerd[1464]: time="2025-05-16T01:37:46.414040702Z" level=warning msg="cleaning up after shim disconnected" id=a2a99bc74137562981aca05e9de136e16f967a33dccebaa6edc0a0bb4276c7c2 namespace=k8s.io
May 16 01:37:46.414216 containerd[1464]: time="2025-05-16T01:37:46.414058676Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 16 01:37:47.250065 containerd[1464]: time="2025-05-16T01:37:47.249720530Z" level=info msg="CreateContainer within sandbox \"4d4ff822c40dac53726d93b05dfee21ecc81b2369df29491b497929459f866f9\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 16 01:37:47.305198 containerd[1464]: time="2025-05-16T01:37:47.305072573Z" level=info msg="CreateContainer within sandbox \"4d4ff822c40dac53726d93b05dfee21ecc81b2369df29491b497929459f866f9\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"68022f20ef3fc12cc4461ab346fda1ec2a6c6f3005312787fecf43255d7fbfde\""
May 16 01:37:47.306768 containerd[1464]: time="2025-05-16T01:37:47.306129505Z" level=info msg="StartContainer for \"68022f20ef3fc12cc4461ab346fda1ec2a6c6f3005312787fecf43255d7fbfde\""
May 16 01:37:47.351767 systemd[1]: Started cri-containerd-68022f20ef3fc12cc4461ab346fda1ec2a6c6f3005312787fecf43255d7fbfde.scope - libcontainer container 68022f20ef3fc12cc4461ab346fda1ec2a6c6f3005312787fecf43255d7fbfde.
May 16 01:37:47.377209 systemd[1]: cri-containerd-68022f20ef3fc12cc4461ab346fda1ec2a6c6f3005312787fecf43255d7fbfde.scope: Deactivated successfully.
May 16 01:37:47.383523 containerd[1464]: time="2025-05-16T01:37:47.383489686Z" level=info msg="StartContainer for \"68022f20ef3fc12cc4461ab346fda1ec2a6c6f3005312787fecf43255d7fbfde\" returns successfully"
May 16 01:37:47.403143 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-68022f20ef3fc12cc4461ab346fda1ec2a6c6f3005312787fecf43255d7fbfde-rootfs.mount: Deactivated successfully.
May 16 01:37:47.410944 containerd[1464]: time="2025-05-16T01:37:47.410844400Z" level=info msg="shim disconnected" id=68022f20ef3fc12cc4461ab346fda1ec2a6c6f3005312787fecf43255d7fbfde namespace=k8s.io
May 16 01:37:47.410944 containerd[1464]: time="2025-05-16T01:37:47.410939799Z" level=warning msg="cleaning up after shim disconnected" id=68022f20ef3fc12cc4461ab346fda1ec2a6c6f3005312787fecf43255d7fbfde namespace=k8s.io
May 16 01:37:47.411061 containerd[1464]: time="2025-05-16T01:37:47.410952393Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 16 01:37:48.261573 containerd[1464]: time="2025-05-16T01:37:48.260436024Z" level=info msg="CreateContainer within sandbox \"4d4ff822c40dac53726d93b05dfee21ecc81b2369df29491b497929459f866f9\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 16 01:37:48.333882 containerd[1464]: time="2025-05-16T01:37:48.333810653Z" level=info msg="CreateContainer within sandbox \"4d4ff822c40dac53726d93b05dfee21ecc81b2369df29491b497929459f866f9\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"070b681782ba3bb71a3b5d6aa5e96ba38976302e619c21c74fbeb2008cf83d60\""
May 16 01:37:48.334640 containerd[1464]: time="2025-05-16T01:37:48.334356526Z" level=info msg="StartContainer for \"070b681782ba3bb71a3b5d6aa5e96ba38976302e619c21c74fbeb2008cf83d60\""
May 16 01:37:48.377811 systemd[1]: Started cri-containerd-070b681782ba3bb71a3b5d6aa5e96ba38976302e619c21c74fbeb2008cf83d60.scope - libcontainer container 070b681782ba3bb71a3b5d6aa5e96ba38976302e619c21c74fbeb2008cf83d60.
May 16 01:37:48.418789 containerd[1464]: time="2025-05-16T01:37:48.418624543Z" level=info msg="StartContainer for \"070b681782ba3bb71a3b5d6aa5e96ba38976302e619c21c74fbeb2008cf83d60\" returns successfully"
May 16 01:37:48.539063 kubelet[2664]: I0516 01:37:48.538952 2664 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
May 16 01:37:48.595884 systemd[1]: Created slice kubepods-burstable-podd0d07b1d_dcad_4f5e_b9fc_569bf7b8b9c0.slice - libcontainer container kubepods-burstable-podd0d07b1d_dcad_4f5e_b9fc_569bf7b8b9c0.slice.
May 16 01:37:48.603835 systemd[1]: Created slice kubepods-burstable-pod4245082f_ac0b_4e33_9326_e26dbdb543b1.slice - libcontainer container kubepods-burstable-pod4245082f_ac0b_4e33_9326_e26dbdb543b1.slice.
May 16 01:37:48.687454 kubelet[2664]: I0516 01:37:48.687152 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdtr5\" (UniqueName: \"kubernetes.io/projected/4245082f-ac0b-4e33-9326-e26dbdb543b1-kube-api-access-gdtr5\") pod \"coredns-7c65d6cfc9-khj27\" (UID: \"4245082f-ac0b-4e33-9326-e26dbdb543b1\") " pod="kube-system/coredns-7c65d6cfc9-khj27"
May 16 01:37:48.687454 kubelet[2664]: I0516 01:37:48.687202 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9jjkc\" (UniqueName: \"kubernetes.io/projected/d0d07b1d-dcad-4f5e-b9fc-569bf7b8b9c0-kube-api-access-9jjkc\") pod \"coredns-7c65d6cfc9-52pm7\" (UID: \"d0d07b1d-dcad-4f5e-b9fc-569bf7b8b9c0\") " pod="kube-system/coredns-7c65d6cfc9-52pm7"
May 16 01:37:48.687454 kubelet[2664]: I0516 01:37:48.687224 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d0d07b1d-dcad-4f5e-b9fc-569bf7b8b9c0-config-volume\") pod \"coredns-7c65d6cfc9-52pm7\" (UID: \"d0d07b1d-dcad-4f5e-b9fc-569bf7b8b9c0\") " pod="kube-system/coredns-7c65d6cfc9-52pm7"
May 16 01:37:48.687454 kubelet[2664]: I0516 01:37:48.687243 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4245082f-ac0b-4e33-9326-e26dbdb543b1-config-volume\") pod \"coredns-7c65d6cfc9-khj27\" (UID: \"4245082f-ac0b-4e33-9326-e26dbdb543b1\") " pod="kube-system/coredns-7c65d6cfc9-khj27"
May 16 01:37:48.900479 containerd[1464]: time="2025-05-16T01:37:48.900428146Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-52pm7,Uid:d0d07b1d-dcad-4f5e-b9fc-569bf7b8b9c0,Namespace:kube-system,Attempt:0,}"
May 16 01:37:48.911686 containerd[1464]: time="2025-05-16T01:37:48.911643086Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-khj27,Uid:4245082f-ac0b-4e33-9326-e26dbdb543b1,Namespace:kube-system,Attempt:0,}"
May 16 01:37:49.325780 kubelet[2664]: I0516 01:37:49.320989 2664 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-cv776" podStartSLOduration=7.97633096 podStartE2EDuration="25.320929232s" podCreationTimestamp="2025-05-16 01:37:24 +0000 UTC" firstStartedPulling="2025-05-16 01:37:26.180890771 +0000 UTC m=+7.336225659" lastFinishedPulling="2025-05-16 01:37:43.525489123 +0000 UTC m=+24.680823931" observedRunningTime="2025-05-16 01:37:49.320143108 +0000 UTC m=+30.475477966" watchObservedRunningTime="2025-05-16 01:37:49.320929232 +0000 UTC m=+30.476264040"
May 16 01:37:50.628174 systemd-networkd[1365]: cilium_host: Link UP
May 16 01:37:50.633331 systemd-networkd[1365]: cilium_net: Link UP
May 16 01:37:50.634068 systemd-networkd[1365]: cilium_net: Gained carrier
May 16 01:37:50.634231 systemd-networkd[1365]: cilium_host: Gained carrier
May 16 01:37:50.751192 systemd-networkd[1365]: cilium_vxlan: Link UP
May 16 01:37:50.751196 systemd-networkd[1365]: cilium_vxlan: Gained carrier
May 16 01:37:51.120657 kernel: NET: Registered PF_ALG protocol family
May 16 01:37:51.154731 systemd-networkd[1365]: cilium_host: Gained IPv6LL
May 16 01:37:51.474825 systemd-networkd[1365]: cilium_net: Gained IPv6LL
May 16 01:37:52.062144 systemd-networkd[1365]: lxc_health: Link UP
May 16 01:37:52.091330 systemd-networkd[1365]: lxc_health: Gained carrier
May 16 01:37:52.510575 systemd-networkd[1365]: lxc2b89e0e8fc00: Link UP
May 16 01:37:52.515884 systemd-networkd[1365]: lxcc09004774f0a: Link UP
May 16 01:37:52.522734 kernel: eth0: renamed from tmpc30e4
May 16 01:37:52.532817 kernel: eth0: renamed from tmp5d858
May 16 01:37:52.548444 systemd-networkd[1365]: lxcc09004774f0a: Gained carrier
May 16 01:37:52.550798 systemd-networkd[1365]: lxc2b89e0e8fc00: Gained carrier
May 16 01:37:52.754756 systemd-networkd[1365]: cilium_vxlan: Gained IPv6LL
May 16 01:37:53.714029 systemd-networkd[1365]: lxc2b89e0e8fc00: Gained IPv6LL
May 16 01:37:54.033838 systemd-networkd[1365]: lxc_health: Gained IPv6LL
May 16 01:37:54.291102 systemd-networkd[1365]: lxcc09004774f0a: Gained IPv6LL
May 16 01:37:57.054122 containerd[1464]: time="2025-05-16T01:37:57.053655750Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 16 01:37:57.054122 containerd[1464]: time="2025-05-16T01:37:57.053748746Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 16 01:37:57.054122 containerd[1464]: time="2025-05-16T01:37:57.053770214Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 16 01:37:57.054122 containerd[1464]: time="2025-05-16T01:37:57.053933202Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 16 01:37:57.093017 systemd[1]: Started cri-containerd-5d8589c70de094a90e26607756556b11ad65314c57d26cd2dad33362c70998a3.scope - libcontainer container 5d8589c70de094a90e26607756556b11ad65314c57d26cd2dad33362c70998a3.
May 16 01:37:57.155756 containerd[1464]: time="2025-05-16T01:37:57.155249034Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 16 01:37:57.155756 containerd[1464]: time="2025-05-16T01:37:57.155679177Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 16 01:37:57.155756 containerd[1464]: time="2025-05-16T01:37:57.155703139Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 16 01:37:57.156783 containerd[1464]: time="2025-05-16T01:37:57.155813155Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 16 01:37:57.160841 containerd[1464]: time="2025-05-16T01:37:57.160812952Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-52pm7,Uid:d0d07b1d-dcad-4f5e-b9fc-569bf7b8b9c0,Namespace:kube-system,Attempt:0,} returns sandbox id \"5d8589c70de094a90e26607756556b11ad65314c57d26cd2dad33362c70998a3\""
May 16 01:37:57.172713 containerd[1464]: time="2025-05-16T01:37:57.172679971Z" level=info msg="CreateContainer within sandbox \"5d8589c70de094a90e26607756556b11ad65314c57d26cd2dad33362c70998a3\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
May 16 01:37:57.190840 systemd[1]: Started cri-containerd-c30e4e6d3f55f4f3f2415d8ffade600bed40abe1cb1df684daeb44cc6b238ca4.scope - libcontainer container c30e4e6d3f55f4f3f2415d8ffade600bed40abe1cb1df684daeb44cc6b238ca4.
May 16 01:37:57.202122 containerd[1464]: time="2025-05-16T01:37:57.202076531Z" level=info msg="CreateContainer within sandbox \"5d8589c70de094a90e26607756556b11ad65314c57d26cd2dad33362c70998a3\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"74bed224dfc7ce0f63c68a90200cf3d38d814b042c1e67503f50965f3c4e5f50\""
May 16 01:37:57.203988 containerd[1464]: time="2025-05-16T01:37:57.203799486Z" level=info msg="StartContainer for \"74bed224dfc7ce0f63c68a90200cf3d38d814b042c1e67503f50965f3c4e5f50\""
May 16 01:37:57.246804 systemd[1]: Started cri-containerd-74bed224dfc7ce0f63c68a90200cf3d38d814b042c1e67503f50965f3c4e5f50.scope - libcontainer container 74bed224dfc7ce0f63c68a90200cf3d38d814b042c1e67503f50965f3c4e5f50.
May 16 01:37:57.269809 containerd[1464]: time="2025-05-16T01:37:57.269771169Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-khj27,Uid:4245082f-ac0b-4e33-9326-e26dbdb543b1,Namespace:kube-system,Attempt:0,} returns sandbox id \"c30e4e6d3f55f4f3f2415d8ffade600bed40abe1cb1df684daeb44cc6b238ca4\""
May 16 01:37:57.273550 containerd[1464]: time="2025-05-16T01:37:57.273144252Z" level=info msg="CreateContainer within sandbox \"c30e4e6d3f55f4f3f2415d8ffade600bed40abe1cb1df684daeb44cc6b238ca4\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
May 16 01:37:57.291895 containerd[1464]: time="2025-05-16T01:37:57.291849578Z" level=info msg="StartContainer for \"74bed224dfc7ce0f63c68a90200cf3d38d814b042c1e67503f50965f3c4e5f50\" returns successfully"
May 16 01:37:57.306868 containerd[1464]: time="2025-05-16T01:37:57.306686845Z" level=info msg="CreateContainer within sandbox \"c30e4e6d3f55f4f3f2415d8ffade600bed40abe1cb1df684daeb44cc6b238ca4\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"fa1fcd50dbe0544f53e9a0576e903f812759882625680480f5a02eafa3b50a86\""
May 16 01:37:57.308710 containerd[1464]: time="2025-05-16T01:37:57.308670463Z" level=info msg="StartContainer for \"fa1fcd50dbe0544f53e9a0576e903f812759882625680480f5a02eafa3b50a86\""
May 16 01:37:57.340976 kubelet[2664]: I0516 01:37:57.340910 2664 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-52pm7" podStartSLOduration=33.340892715 podStartE2EDuration="33.340892715s" podCreationTimestamp="2025-05-16 01:37:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 01:37:57.338962632 +0000 UTC m=+38.494297390" watchObservedRunningTime="2025-05-16 01:37:57.340892715 +0000 UTC m=+38.496227493"
May 16 01:37:57.369823 systemd[1]: Started cri-containerd-fa1fcd50dbe0544f53e9a0576e903f812759882625680480f5a02eafa3b50a86.scope - libcontainer container fa1fcd50dbe0544f53e9a0576e903f812759882625680480f5a02eafa3b50a86.
May 16 01:37:57.402701 containerd[1464]: time="2025-05-16T01:37:57.402656013Z" level=info msg="StartContainer for \"fa1fcd50dbe0544f53e9a0576e903f812759882625680480f5a02eafa3b50a86\" returns successfully"
May 16 01:37:58.352659 kubelet[2664]: I0516 01:37:58.352463 2664 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-khj27" podStartSLOduration=34.352426339 podStartE2EDuration="34.352426339s" podCreationTimestamp="2025-05-16 01:37:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 01:37:58.346563053 +0000 UTC m=+39.501897891" watchObservedRunningTime="2025-05-16 01:37:58.352426339 +0000 UTC m=+39.507761147"
May 16 01:41:12.562332 systemd[1]: Started sshd@9-172.24.4.31:22-172.24.4.1:50136.service - OpenSSH per-connection server daemon (172.24.4.1:50136).
May 16 01:41:13.901687 sshd[4067]: Accepted publickey for core from 172.24.4.1 port 50136 ssh2: RSA SHA256:hbqBC1N3eVnarOblqHTsu5pd4fHUzAJAaCt0vGQ0ke0
May 16 01:41:13.905268 sshd-session[4067]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 01:41:13.926192 systemd-logind[1441]: New session 12 of user core.
May 16 01:41:13.936202 systemd[1]: Started session-12.scope - Session 12 of User core.
May 16 01:41:14.676419 sshd[4071]: Connection closed by 172.24.4.1 port 50136
May 16 01:41:14.677206 sshd-session[4067]: pam_unix(sshd:session): session closed for user core
May 16 01:41:14.686896 systemd-logind[1441]: Session 12 logged out. Waiting for processes to exit.
May 16 01:41:14.688534 systemd[1]: sshd@9-172.24.4.31:22-172.24.4.1:50136.service: Deactivated successfully.
May 16 01:41:14.712435 systemd[1]: session-12.scope: Deactivated successfully.
May 16 01:41:14.719020 systemd-logind[1441]: Removed session 12.
May 16 01:41:19.714460 systemd[1]: Started sshd@10-172.24.4.31:22-172.24.4.1:41598.service - OpenSSH per-connection server daemon (172.24.4.1:41598).
May 16 01:41:21.030191 sshd[4086]: Accepted publickey for core from 172.24.4.1 port 41598 ssh2: RSA SHA256:hbqBC1N3eVnarOblqHTsu5pd4fHUzAJAaCt0vGQ0ke0
May 16 01:41:21.032893 sshd-session[4086]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 01:41:21.056111 systemd-logind[1441]: New session 13 of user core.
May 16 01:41:21.063039 systemd[1]: Started session-13.scope - Session 13 of User core.
May 16 01:41:21.935531 sshd[4091]: Connection closed by 172.24.4.1 port 41598
May 16 01:41:21.935960 sshd-session[4086]: pam_unix(sshd:session): session closed for user core
May 16 01:41:21.953007 systemd[1]: sshd@10-172.24.4.31:22-172.24.4.1:41598.service: Deactivated successfully.
May 16 01:41:21.953771 systemd-logind[1441]: Session 13 logged out. Waiting for processes to exit.
May 16 01:41:21.961349 systemd[1]: session-13.scope: Deactivated successfully.
May 16 01:41:21.964953 systemd-logind[1441]: Removed session 13.
May 16 01:41:26.963269 systemd[1]: Started sshd@11-172.24.4.31:22-172.24.4.1:45522.service - OpenSSH per-connection server daemon (172.24.4.1:45522).
May 16 01:41:28.283270 sshd[4104]: Accepted publickey for core from 172.24.4.1 port 45522 ssh2: RSA SHA256:hbqBC1N3eVnarOblqHTsu5pd4fHUzAJAaCt0vGQ0ke0
May 16 01:41:28.286432 sshd-session[4104]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 01:41:28.299732 systemd-logind[1441]: New session 14 of user core.
May 16 01:41:28.310305 systemd[1]: Started session-14.scope - Session 14 of User core.
May 16 01:41:29.227909 sshd[4106]: Connection closed by 172.24.4.1 port 45522
May 16 01:41:29.229215 sshd-session[4104]: pam_unix(sshd:session): session closed for user core
May 16 01:41:29.236152 systemd[1]: sshd@11-172.24.4.31:22-172.24.4.1:45522.service: Deactivated successfully.
May 16 01:41:29.245380 systemd[1]: session-14.scope: Deactivated successfully.
May 16 01:41:29.249015 systemd-logind[1441]: Session 14 logged out. Waiting for processes to exit.
May 16 01:41:29.253375 systemd-logind[1441]: Removed session 14.
May 16 01:41:34.258711 systemd[1]: Started sshd@12-172.24.4.31:22-172.24.4.1:57886.service - OpenSSH per-connection server daemon (172.24.4.1:57886).
May 16 01:41:35.673660 sshd[4118]: Accepted publickey for core from 172.24.4.1 port 57886 ssh2: RSA SHA256:hbqBC1N3eVnarOblqHTsu5pd4fHUzAJAaCt0vGQ0ke0
May 16 01:41:35.676818 sshd-session[4118]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 01:41:35.692749 systemd-logind[1441]: New session 15 of user core.
May 16 01:41:35.704959 systemd[1]: Started session-15.scope - Session 15 of User core.
May 16 01:41:36.423408 sshd[4120]: Connection closed by 172.24.4.1 port 57886
May 16 01:41:36.427058 sshd-session[4118]: pam_unix(sshd:session): session closed for user core
May 16 01:41:36.440818 systemd[1]: sshd@12-172.24.4.31:22-172.24.4.1:57886.service: Deactivated successfully.
May 16 01:41:36.445293 systemd[1]: session-15.scope: Deactivated successfully.
May 16 01:41:36.450003 systemd-logind[1441]: Session 15 logged out. Waiting for processes to exit.
May 16 01:41:36.461956 systemd[1]: Started sshd@13-172.24.4.31:22-172.24.4.1:57890.service - OpenSSH per-connection server daemon (172.24.4.1:57890).
May 16 01:41:36.466884 systemd-logind[1441]: Removed session 15.
May 16 01:41:37.850370 sshd[4132]: Accepted publickey for core from 172.24.4.1 port 57890 ssh2: RSA SHA256:hbqBC1N3eVnarOblqHTsu5pd4fHUzAJAaCt0vGQ0ke0
May 16 01:41:37.853532 sshd-session[4132]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 01:41:37.865373 systemd-logind[1441]: New session 16 of user core.
May 16 01:41:37.875969 systemd[1]: Started session-16.scope - Session 16 of User core.
May 16 01:41:38.729850 sshd[4134]: Connection closed by 172.24.4.1 port 57890
May 16 01:41:38.729646 sshd-session[4132]: pam_unix(sshd:session): session closed for user core
May 16 01:41:38.749784 systemd[1]: sshd@13-172.24.4.31:22-172.24.4.1:57890.service: Deactivated successfully.
May 16 01:41:38.756055 systemd[1]: session-16.scope: Deactivated successfully.
May 16 01:41:38.759070 systemd-logind[1441]: Session 16 logged out. Waiting for processes to exit.
May 16 01:41:38.771877 systemd[1]: Started sshd@14-172.24.4.31:22-172.24.4.1:57898.service - OpenSSH per-connection server daemon (172.24.4.1:57898).
May 16 01:41:38.775940 systemd-logind[1441]: Removed session 16.
May 16 01:41:40.525148 sshd[4143]: Accepted publickey for core from 172.24.4.1 port 57898 ssh2: RSA SHA256:hbqBC1N3eVnarOblqHTsu5pd4fHUzAJAaCt0vGQ0ke0
May 16 01:41:40.531533 sshd-session[4143]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 01:41:40.546708 systemd-logind[1441]: New session 17 of user core.
May 16 01:41:40.571579 systemd[1]: Started session-17.scope - Session 17 of User core.
May 16 01:41:41.244838 sshd[4146]: Connection closed by 172.24.4.1 port 57898
May 16 01:41:41.245463 sshd-session[4143]: pam_unix(sshd:session): session closed for user core
May 16 01:41:41.256477 systemd[1]: sshd@14-172.24.4.31:22-172.24.4.1:57898.service: Deactivated successfully.
May 16 01:41:41.262853 systemd[1]: session-17.scope: Deactivated successfully.
May 16 01:41:41.265465 systemd-logind[1441]: Session 17 logged out. Waiting for processes to exit.
May 16 01:41:41.268118 systemd-logind[1441]: Removed session 17.
May 16 01:41:46.275231 systemd[1]: Started sshd@15-172.24.4.31:22-172.24.4.1:35984.service - OpenSSH per-connection server daemon (172.24.4.1:35984).
May 16 01:41:47.522013 sshd[4157]: Accepted publickey for core from 172.24.4.1 port 35984 ssh2: RSA SHA256:hbqBC1N3eVnarOblqHTsu5pd4fHUzAJAaCt0vGQ0ke0
May 16 01:41:47.526521 sshd-session[4157]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 01:41:47.541433 systemd-logind[1441]: New session 18 of user core.
May 16 01:41:47.550999 systemd[1]: Started session-18.scope - Session 18 of User core.
May 16 01:41:48.411710 sshd[4159]: Connection closed by 172.24.4.1 port 35984
May 16 01:41:48.412251 sshd-session[4157]: pam_unix(sshd:session): session closed for user core
May 16 01:41:48.424267 systemd[1]: sshd@15-172.24.4.31:22-172.24.4.1:35984.service: Deactivated successfully.
May 16 01:41:48.434772 systemd[1]: session-18.scope: Deactivated successfully.
May 16 01:41:48.438759 systemd-logind[1441]: Session 18 logged out. Waiting for processes to exit.
May 16 01:41:48.441543 systemd-logind[1441]: Removed session 18.
May 16 01:41:53.445287 systemd[1]: Started sshd@16-172.24.4.31:22-172.24.4.1:35996.service - OpenSSH per-connection server daemon (172.24.4.1:35996).
May 16 01:41:54.773643 sshd[4170]: Accepted publickey for core from 172.24.4.1 port 35996 ssh2: RSA SHA256:hbqBC1N3eVnarOblqHTsu5pd4fHUzAJAaCt0vGQ0ke0
May 16 01:41:54.777500 sshd-session[4170]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 01:41:54.791229 systemd-logind[1441]: New session 19 of user core.
May 16 01:41:54.799947 systemd[1]: Started session-19.scope - Session 19 of User core.
May 16 01:41:55.505654 sshd[4172]: Connection closed by 172.24.4.1 port 35996
May 16 01:41:55.506883 sshd-session[4170]: pam_unix(sshd:session): session closed for user core
May 16 01:41:55.518536 systemd[1]: sshd@16-172.24.4.31:22-172.24.4.1:35996.service: Deactivated successfully.
May 16 01:41:55.525738 systemd[1]: session-19.scope: Deactivated successfully.
May 16 01:41:55.527775 systemd-logind[1441]: Session 19 logged out. Waiting for processes to exit.
May 16 01:41:55.539299 systemd[1]: Started sshd@17-172.24.4.31:22-172.24.4.1:36686.service - OpenSSH per-connection server daemon (172.24.4.1:36686).
May 16 01:41:55.543322 systemd-logind[1441]: Removed session 19.
May 16 01:41:56.763704 sshd[4183]: Accepted publickey for core from 172.24.4.1 port 36686 ssh2: RSA SHA256:hbqBC1N3eVnarOblqHTsu5pd4fHUzAJAaCt0vGQ0ke0
May 16 01:41:56.770063 sshd-session[4183]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 01:41:56.781327 systemd-logind[1441]: New session 20 of user core.
May 16 01:41:56.789005 systemd[1]: Started session-20.scope - Session 20 of User core.
May 16 01:41:57.597661 sshd[4187]: Connection closed by 172.24.4.1 port 36686
May 16 01:41:57.598898 sshd-session[4183]: pam_unix(sshd:session): session closed for user core
May 16 01:41:57.608713 systemd[1]: sshd@17-172.24.4.31:22-172.24.4.1:36686.service: Deactivated successfully.
May 16 01:41:57.614507 systemd[1]: session-20.scope: Deactivated successfully.
May 16 01:41:57.620806 systemd-logind[1441]: Session 20 logged out. Waiting for processes to exit.
May 16 01:41:57.631212 systemd[1]: Started sshd@18-172.24.4.31:22-172.24.4.1:36692.service - OpenSSH per-connection server daemon (172.24.4.1:36692).
May 16 01:41:57.635928 systemd-logind[1441]: Removed session 20.
May 16 01:41:58.915871 sshd[4196]: Accepted publickey for core from 172.24.4.1 port 36692 ssh2: RSA SHA256:hbqBC1N3eVnarOblqHTsu5pd4fHUzAJAaCt0vGQ0ke0
May 16 01:41:58.920711 sshd-session[4196]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 01:41:58.936175 systemd-logind[1441]: New session 21 of user core.
May 16 01:41:58.945966 systemd[1]: Started session-21.scope - Session 21 of User core.
May 16 01:42:02.227837 sshd[4198]: Connection closed by 172.24.4.1 port 36692
May 16 01:42:02.231276 sshd-session[4196]: pam_unix(sshd:session): session closed for user core
May 16 01:42:02.253190 systemd[1]: sshd@18-172.24.4.31:22-172.24.4.1:36692.service: Deactivated successfully.
May 16 01:42:02.260788 systemd[1]: session-21.scope: Deactivated successfully.
May 16 01:42:02.267514 systemd-logind[1441]: Session 21 logged out. Waiting for processes to exit.
May 16 01:42:02.281213 systemd[1]: Started sshd@19-172.24.4.31:22-172.24.4.1:36698.service - OpenSSH per-connection server daemon (172.24.4.1:36698).
May 16 01:42:02.284118 systemd-logind[1441]: Removed session 21.
May 16 01:42:03.522053 sshd[4214]: Accepted publickey for core from 172.24.4.1 port 36698 ssh2: RSA SHA256:hbqBC1N3eVnarOblqHTsu5pd4fHUzAJAaCt0vGQ0ke0
May 16 01:42:03.526315 sshd-session[4214]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 01:42:03.544822 systemd-logind[1441]: New session 22 of user core.
May 16 01:42:03.552977 systemd[1]: Started session-22.scope - Session 22 of User core.
May 16 01:42:04.420444 sshd[4216]: Connection closed by 172.24.4.1 port 36698
May 16 01:42:04.422425 sshd-session[4214]: pam_unix(sshd:session): session closed for user core
May 16 01:42:04.447320 systemd[1]: sshd@19-172.24.4.31:22-172.24.4.1:36698.service: Deactivated successfully.
May 16 01:42:04.454097 systemd[1]: session-22.scope: Deactivated successfully.
May 16 01:42:04.457160 systemd-logind[1441]: Session 22 logged out. Waiting for processes to exit.
May 16 01:42:04.470384 systemd[1]: Started sshd@20-172.24.4.31:22-172.24.4.1:51704.service - OpenSSH per-connection server daemon (172.24.4.1:51704).
May 16 01:42:04.476231 systemd-logind[1441]: Removed session 22.
May 16 01:42:05.760785 sshd[4225]: Accepted publickey for core from 172.24.4.1 port 51704 ssh2: RSA SHA256:hbqBC1N3eVnarOblqHTsu5pd4fHUzAJAaCt0vGQ0ke0
May 16 01:42:05.763929 sshd-session[4225]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 01:42:05.771837 systemd-logind[1441]: New session 23 of user core.
May 16 01:42:05.782844 systemd[1]: Started session-23.scope - Session 23 of User core.
May 16 01:42:06.490971 sshd[4227]: Connection closed by 172.24.4.1 port 51704
May 16 01:42:06.492393 sshd-session[4225]: pam_unix(sshd:session): session closed for user core
May 16 01:42:06.502443 systemd[1]: sshd@20-172.24.4.31:22-172.24.4.1:51704.service: Deactivated successfully.
May 16 01:42:06.507378 systemd[1]: session-23.scope: Deactivated successfully.
May 16 01:42:06.509906 systemd-logind[1441]: Session 23 logged out. Waiting for processes to exit.
May 16 01:42:06.512812 systemd-logind[1441]: Removed session 23.
May 16 01:42:11.512027 systemd[1]: Started sshd@21-172.24.4.31:22-172.24.4.1:51706.service - OpenSSH per-connection server daemon (172.24.4.1:51706).
May 16 01:42:12.922009 sshd[4241]: Accepted publickey for core from 172.24.4.1 port 51706 ssh2: RSA SHA256:hbqBC1N3eVnarOblqHTsu5pd4fHUzAJAaCt0vGQ0ke0
May 16 01:42:12.925289 sshd-session[4241]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 01:42:12.941971 systemd-logind[1441]: New session 24 of user core.
May 16 01:42:12.948153 systemd[1]: Started session-24.scope - Session 24 of User core.
May 16 01:42:13.776706 sshd[4244]: Connection closed by 172.24.4.1 port 51706
May 16 01:42:13.776459 sshd-session[4241]: pam_unix(sshd:session): session closed for user core
May 16 01:42:13.782215 systemd[1]: sshd@21-172.24.4.31:22-172.24.4.1:51706.service: Deactivated successfully.
May 16 01:42:13.789193 systemd[1]: session-24.scope: Deactivated successfully.
May 16 01:42:13.794127 systemd-logind[1441]: Session 24 logged out. Waiting for processes to exit.
May 16 01:42:13.796970 systemd-logind[1441]: Removed session 24.
May 16 01:42:18.801244 systemd[1]: Started sshd@22-172.24.4.31:22-172.24.4.1:37202.service - OpenSSH per-connection server daemon (172.24.4.1:37202).
May 16 01:42:20.157838 sshd[4254]: Accepted publickey for core from 172.24.4.1 port 37202 ssh2: RSA SHA256:hbqBC1N3eVnarOblqHTsu5pd4fHUzAJAaCt0vGQ0ke0
May 16 01:42:20.159203 sshd-session[4254]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 01:42:20.172756 systemd-logind[1441]: New session 25 of user core.
May 16 01:42:20.177909 systemd[1]: Started session-25.scope - Session 25 of User core.
May 16 01:42:20.973179 sshd[4258]: Connection closed by 172.24.4.1 port 37202
May 16 01:42:20.974792 sshd-session[4254]: pam_unix(sshd:session): session closed for user core
May 16 01:42:20.983544 systemd[1]: sshd@22-172.24.4.31:22-172.24.4.1:37202.service: Deactivated successfully.
May 16 01:42:20.988429 systemd[1]: session-25.scope: Deactivated successfully.
May 16 01:42:20.990448 systemd-logind[1441]: Session 25 logged out. Waiting for processes to exit.
May 16 01:42:20.993726 systemd-logind[1441]: Removed session 25.
May 16 01:42:26.004399 systemd[1]: Started sshd@23-172.24.4.31:22-172.24.4.1:52626.service - OpenSSH per-connection server daemon (172.24.4.1:52626).
May 16 01:42:27.282512 sshd[4269]: Accepted publickey for core from 172.24.4.1 port 52626 ssh2: RSA SHA256:hbqBC1N3eVnarOblqHTsu5pd4fHUzAJAaCt0vGQ0ke0
May 16 01:42:27.285976 sshd-session[4269]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 01:42:27.299463 systemd-logind[1441]: New session 26 of user core.
May 16 01:42:27.311945 systemd[1]: Started session-26.scope - Session 26 of User core.
May 16 01:42:27.978925 sshd[4273]: Connection closed by 172.24.4.1 port 52626
May 16 01:42:27.981074 sshd-session[4269]: pam_unix(sshd:session): session closed for user core
May 16 01:42:27.993113 systemd[1]: sshd@23-172.24.4.31:22-172.24.4.1:52626.service: Deactivated successfully.
May 16 01:42:27.998208 systemd[1]: session-26.scope: Deactivated successfully.
May 16 01:42:28.001026 systemd-logind[1441]: Session 26 logged out. Waiting for processes to exit.
May 16 01:42:28.014126 systemd[1]: Started sshd@24-172.24.4.31:22-172.24.4.1:52638.service - OpenSSH per-connection server daemon (172.24.4.1:52638).
May 16 01:42:28.018108 systemd-logind[1441]: Removed session 26.
May 16 01:42:29.296789 sshd[4284]: Accepted publickey for core from 172.24.4.1 port 52638 ssh2: RSA SHA256:hbqBC1N3eVnarOblqHTsu5pd4fHUzAJAaCt0vGQ0ke0
May 16 01:42:29.300349 sshd-session[4284]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 01:42:29.313688 systemd-logind[1441]: New session 27 of user core.
May 16 01:42:29.324010 systemd[1]: Started session-27.scope - Session 27 of User core.
May 16 01:42:31.323027 containerd[1464]: time="2025-05-16T01:42:31.322909247Z" level=info msg="StopContainer for \"1cbd2c43f46565b93389d09aef32b7973918a735edef87b7534764b9c3142744\" with timeout 30 (s)"
May 16 01:42:31.327611 containerd[1464]: time="2025-05-16T01:42:31.326001424Z" level=info msg="Stop container \"1cbd2c43f46565b93389d09aef32b7973918a735edef87b7534764b9c3142744\" with signal terminated"
May 16 01:42:31.348875 systemd[1]: cri-containerd-1cbd2c43f46565b93389d09aef32b7973918a735edef87b7534764b9c3142744.scope: Deactivated successfully.
May 16 01:42:31.349162 systemd[1]: cri-containerd-1cbd2c43f46565b93389d09aef32b7973918a735edef87b7534764b9c3142744.scope: Consumed 1.280s CPU time.
May 16 01:42:31.387163 containerd[1464]: time="2025-05-16T01:42:31.387066832Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 16 01:42:31.392559 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1cbd2c43f46565b93389d09aef32b7973918a735edef87b7534764b9c3142744-rootfs.mount: Deactivated successfully.
May 16 01:42:31.400686 containerd[1464]: time="2025-05-16T01:42:31.400574797Z" level=info msg="StopContainer for \"070b681782ba3bb71a3b5d6aa5e96ba38976302e619c21c74fbeb2008cf83d60\" with timeout 2 (s)"
May 16 01:42:31.401548 containerd[1464]: time="2025-05-16T01:42:31.401525399Z" level=info msg="Stop container \"070b681782ba3bb71a3b5d6aa5e96ba38976302e619c21c74fbeb2008cf83d60\" with signal terminated"
May 16 01:42:31.424135 systemd-networkd[1365]: lxc_health: Link DOWN
May 16 01:42:31.426763 containerd[1464]: time="2025-05-16T01:42:31.425022249Z" level=info msg="shim disconnected" id=1cbd2c43f46565b93389d09aef32b7973918a735edef87b7534764b9c3142744 namespace=k8s.io
May 16 01:42:31.426763 containerd[1464]: time="2025-05-16T01:42:31.425165920Z" level=warning msg="cleaning up after shim disconnected" id=1cbd2c43f46565b93389d09aef32b7973918a735edef87b7534764b9c3142744 namespace=k8s.io
May 16 01:42:31.426763 containerd[1464]: time="2025-05-16T01:42:31.425186518Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 16 01:42:31.424146 systemd-networkd[1365]: lxc_health: Lost carrier
May 16 01:42:31.446947 systemd[1]: cri-containerd-070b681782ba3bb71a3b5d6aa5e96ba38976302e619c21c74fbeb2008cf83d60.scope: Deactivated successfully.
May 16 01:42:31.447408 systemd[1]: cri-containerd-070b681782ba3bb71a3b5d6aa5e96ba38976302e619c21c74fbeb2008cf83d60.scope: Consumed 10.566s CPU time.
May 16 01:42:31.486675 containerd[1464]: time="2025-05-16T01:42:31.486623777Z" level=info msg="StopContainer for \"1cbd2c43f46565b93389d09aef32b7973918a735edef87b7534764b9c3142744\" returns successfully"
May 16 01:42:31.488468 containerd[1464]: time="2025-05-16T01:42:31.488199005Z" level=info msg="StopPodSandbox for \"bb375f658fead2ee1875c70fda20f610293f083f1e6d20dc81a1e988e969a77d\""
May 16 01:42:31.488468 containerd[1464]: time="2025-05-16T01:42:31.488252306Z" level=info msg="Container to stop \"1cbd2c43f46565b93389d09aef32b7973918a735edef87b7534764b9c3142744\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 16 01:42:31.493617 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-bb375f658fead2ee1875c70fda20f610293f083f1e6d20dc81a1e988e969a77d-shm.mount: Deactivated successfully.
May 16 01:42:31.504810 systemd[1]: cri-containerd-bb375f658fead2ee1875c70fda20f610293f083f1e6d20dc81a1e988e969a77d.scope: Deactivated successfully.
May 16 01:42:31.511956 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-070b681782ba3bb71a3b5d6aa5e96ba38976302e619c21c74fbeb2008cf83d60-rootfs.mount: Deactivated successfully.
May 16 01:42:31.529090 containerd[1464]: time="2025-05-16T01:42:31.529008941Z" level=info msg="shim disconnected" id=070b681782ba3bb71a3b5d6aa5e96ba38976302e619c21c74fbeb2008cf83d60 namespace=k8s.io
May 16 01:42:31.529475 containerd[1464]: time="2025-05-16T01:42:31.529099302Z" level=warning msg="cleaning up after shim disconnected" id=070b681782ba3bb71a3b5d6aa5e96ba38976302e619c21c74fbeb2008cf83d60 namespace=k8s.io
May 16 01:42:31.529475 containerd[1464]: time="2025-05-16T01:42:31.529113248Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 16 01:42:31.547826 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bb375f658fead2ee1875c70fda20f610293f083f1e6d20dc81a1e988e969a77d-rootfs.mount: Deactivated successfully.
May 16 01:42:31.558971 containerd[1464]: time="2025-05-16T01:42:31.558887628Z" level=info msg="shim disconnected" id=bb375f658fead2ee1875c70fda20f610293f083f1e6d20dc81a1e988e969a77d namespace=k8s.io
May 16 01:42:31.558971 containerd[1464]: time="2025-05-16T01:42:31.558952230Z" level=warning msg="cleaning up after shim disconnected" id=bb375f658fead2ee1875c70fda20f610293f083f1e6d20dc81a1e988e969a77d namespace=k8s.io
May 16 01:42:31.558971 containerd[1464]: time="2025-05-16T01:42:31.558963951Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 16 01:42:31.568501 containerd[1464]: time="2025-05-16T01:42:31.568261313Z" level=info msg="StopContainer for \"070b681782ba3bb71a3b5d6aa5e96ba38976302e619c21c74fbeb2008cf83d60\" returns successfully"
May 16 01:42:31.571087 containerd[1464]: time="2025-05-16T01:42:31.571050278Z" level=info msg="StopPodSandbox for \"4d4ff822c40dac53726d93b05dfee21ecc81b2369df29491b497929459f866f9\""
May 16 01:42:31.571172 containerd[1464]: time="2025-05-16T01:42:31.571100523Z" level=info msg="Container to stop \"95b4a4238de3618a26807c25586973a11ec473cbc69c0a4d97616300320b83d8\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 16 01:42:31.571172 containerd[1464]: time="2025-05-16T01:42:31.571140168Z" level=info msg="Container to stop \"cfbef090724045725d2ed11378ca8f160a6f6ebee9de5eb152aa2d2f5e257540\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 16 01:42:31.571172 containerd[1464]: time="2025-05-16T01:42:31.571154865Z" level=info msg="Container to stop \"68022f20ef3fc12cc4461ab346fda1ec2a6c6f3005312787fecf43255d7fbfde\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 16 01:42:31.571172 containerd[1464]: time="2025-05-16T01:42:31.571166327Z" level=info msg="Container to stop \"a2a99bc74137562981aca05e9de136e16f967a33dccebaa6edc0a0bb4276c7c2\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 16 01:42:31.571550 containerd[1464]: time="2025-05-16T01:42:31.571177439Z" level=info msg="Container to stop \"070b681782ba3bb71a3b5d6aa5e96ba38976302e619c21c74fbeb2008cf83d60\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 16 01:42:31.595029 systemd[1]: cri-containerd-4d4ff822c40dac53726d93b05dfee21ecc81b2369df29491b497929459f866f9.scope: Deactivated successfully.
May 16 01:42:31.606159 containerd[1464]: time="2025-05-16T01:42:31.605462559Z" level=info msg="TearDown network for sandbox \"bb375f658fead2ee1875c70fda20f610293f083f1e6d20dc81a1e988e969a77d\" successfully"
May 16 01:42:31.606159 containerd[1464]: time="2025-05-16T01:42:31.605526940Z" level=info msg="StopPodSandbox for \"bb375f658fead2ee1875c70fda20f610293f083f1e6d20dc81a1e988e969a77d\" returns successfully"
May 16 01:42:31.656924 containerd[1464]: time="2025-05-16T01:42:31.656636305Z" level=info msg="shim disconnected" id=4d4ff822c40dac53726d93b05dfee21ecc81b2369df29491b497929459f866f9 namespace=k8s.io
May 16 01:42:31.656924 containerd[1464]: time="2025-05-16T01:42:31.656780196Z" level=warning msg="cleaning up after shim disconnected" id=4d4ff822c40dac53726d93b05dfee21ecc81b2369df29491b497929459f866f9 namespace=k8s.io
May 16 01:42:31.656924 containerd[1464]: time="2025-05-16T01:42:31.656829059Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 16 01:42:31.678981 containerd[1464]: time="2025-05-16T01:42:31.678478216Z" level=info msg="TearDown network for sandbox \"4d4ff822c40dac53726d93b05dfee21ecc81b2369df29491b497929459f866f9\" successfully"
May 16 01:42:31.678981 containerd[1464]: time="2025-05-16T01:42:31.678520776Z" level=info msg="StopPodSandbox for \"4d4ff822c40dac53726d93b05dfee21ecc81b2369df29491b497929459f866f9\" returns successfully"
May 16 01:42:31.714499 kubelet[2664]: I0516 01:42:31.714367 2664 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/31428c3c-0e29-425b-8a25-e6b8974e40c1-host-proc-sys-kernel\") pod \"31428c3c-0e29-425b-8a25-e6b8974e40c1\" (UID: \"31428c3c-0e29-425b-8a25-e6b8974e40c1\") "
May 16 01:42:31.714499 kubelet[2664]: I0516 01:42:31.714479 2664 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/31428c3c-0e29-425b-8a25-e6b8974e40c1-hostproc\") pod \"31428c3c-0e29-425b-8a25-e6b8974e40c1\" (UID: \"31428c3c-0e29-425b-8a25-e6b8974e40c1\") "
May 16 01:42:31.715219 kubelet[2664]: I0516 01:42:31.714529 2664 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fgffg\" (UniqueName: \"kubernetes.io/projected/07e7820b-3ca5-49a7-b324-f7e817a58649-kube-api-access-fgffg\") pod \"07e7820b-3ca5-49a7-b324-f7e817a58649\" (UID: \"07e7820b-3ca5-49a7-b324-f7e817a58649\") "
May 16 01:42:31.715219 kubelet[2664]: I0516 01:42:31.714567 2664 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/31428c3c-0e29-425b-8a25-e6b8974e40c1-host-proc-sys-net\") pod \"31428c3c-0e29-425b-8a25-e6b8974e40c1\" (UID: \"31428c3c-0e29-425b-8a25-e6b8974e40c1\") "
May 16 01:42:31.715219 kubelet[2664]: I0516 01:42:31.714649 2664 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/31428c3c-0e29-425b-8a25-e6b8974e40c1-clustermesh-secrets\") pod \"31428c3c-0e29-425b-8a25-e6b8974e40c1\" (UID: \"31428c3c-0e29-425b-8a25-e6b8974e40c1\") "
May 16 01:42:31.715219 kubelet[2664]: I0516 01:42:31.714683 2664 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/31428c3c-0e29-425b-8a25-e6b8974e40c1-bpf-maps\") pod \"31428c3c-0e29-425b-8a25-e6b8974e40c1\" (UID: \"31428c3c-0e29-425b-8a25-e6b8974e40c1\") "
May 16 01:42:31.715219 kubelet[2664]: I0516 01:42:31.714730 2664 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/31428c3c-0e29-425b-8a25-e6b8974e40c1-cilium-run\") pod \"31428c3c-0e29-425b-8a25-e6b8974e40c1\" (UID: \"31428c3c-0e29-425b-8a25-e6b8974e40c1\") "
May 16 01:42:31.715219 kubelet[2664]: I0516 01:42:31.714760 2664 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/31428c3c-0e29-425b-8a25-e6b8974e40c1-lib-modules\") pod \"31428c3c-0e29-425b-8a25-e6b8974e40c1\" (UID: \"31428c3c-0e29-425b-8a25-e6b8974e40c1\") "
May 16 01:42:31.715438 kubelet[2664]: I0516 01:42:31.714795 2664 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/07e7820b-3ca5-49a7-b324-f7e817a58649-cilium-config-path\") pod \"07e7820b-3ca5-49a7-b324-f7e817a58649\" (UID: \"07e7820b-3ca5-49a7-b324-f7e817a58649\") "
May 16 01:42:31.715438 kubelet[2664]: I0516 01:42:31.714831 2664 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/31428c3c-0e29-425b-8a25-e6b8974e40c1-hubble-tls\") pod \"31428c3c-0e29-425b-8a25-e6b8974e40c1\" (UID: \"31428c3c-0e29-425b-8a25-e6b8974e40c1\") "
May 16 01:42:31.715438 kubelet[2664]: I0516 01:42:31.714861 2664 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/31428c3c-0e29-425b-8a25-e6b8974e40c1-etc-cni-netd\") pod \"31428c3c-0e29-425b-8a25-e6b8974e40c1\" (UID: \"31428c3c-0e29-425b-8a25-e6b8974e40c1\") "
May 16 01:42:31.715438 kubelet[2664]: I0516 01:42:31.714891 2664 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/31428c3c-0e29-425b-8a25-e6b8974e40c1-xtables-lock\") pod \"31428c3c-0e29-425b-8a25-e6b8974e40c1\" (UID: \"31428c3c-0e29-425b-8a25-e6b8974e40c1\") "
May 16 01:42:31.715438 kubelet[2664]: I0516 01:42:31.714939 2664 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g8ndv\" (UniqueName: \"kubernetes.io/projected/31428c3c-0e29-425b-8a25-e6b8974e40c1-kube-api-access-g8ndv\") pod \"31428c3c-0e29-425b-8a25-e6b8974e40c1\" (UID: \"31428c3c-0e29-425b-8a25-e6b8974e40c1\") "
May 16 01:42:31.715438 kubelet[2664]: I0516 01:42:31.714986 2664 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/31428c3c-0e29-425b-8a25-e6b8974e40c1-cilium-config-path\") pod \"31428c3c-0e29-425b-8a25-e6b8974e40c1\" (UID: \"31428c3c-0e29-425b-8a25-e6b8974e40c1\") "
May 16 01:42:31.715765 kubelet[2664]: I0516 01:42:31.715017 2664 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/31428c3c-0e29-425b-8a25-e6b8974e40c1-cilium-cgroup\") pod \"31428c3c-0e29-425b-8a25-e6b8974e40c1\" (UID: \"31428c3c-0e29-425b-8a25-e6b8974e40c1\") "
May 16 01:42:31.715765 kubelet[2664]: I0516 01:42:31.715056 2664 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/31428c3c-0e29-425b-8a25-e6b8974e40c1-cni-path\") pod \"31428c3c-0e29-425b-8a25-e6b8974e40c1\" (UID: \"31428c3c-0e29-425b-8a25-e6b8974e40c1\") "
May 16 01:42:31.715765 kubelet[2664]: I0516 01:42:31.715272 2664 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/31428c3c-0e29-425b-8a25-e6b8974e40c1-cni-path" (OuterVolumeSpecName: "cni-path") pod "31428c3c-0e29-425b-8a25-e6b8974e40c1" (UID: "31428c3c-0e29-425b-8a25-e6b8974e40c1"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 16 01:42:31.715765 kubelet[2664]: I0516 01:42:31.715374 2664 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/31428c3c-0e29-425b-8a25-e6b8974e40c1-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "31428c3c-0e29-425b-8a25-e6b8974e40c1" (UID: "31428c3c-0e29-425b-8a25-e6b8974e40c1"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 16 01:42:31.715765 kubelet[2664]: I0516 01:42:31.715419 2664 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/31428c3c-0e29-425b-8a25-e6b8974e40c1-hostproc" (OuterVolumeSpecName: "hostproc") pod "31428c3c-0e29-425b-8a25-e6b8974e40c1" (UID: "31428c3c-0e29-425b-8a25-e6b8974e40c1"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 16 01:42:31.720068 kubelet[2664]: I0516 01:42:31.720026 2664 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/31428c3c-0e29-425b-8a25-e6b8974e40c1-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "31428c3c-0e29-425b-8a25-e6b8974e40c1" (UID: "31428c3c-0e29-425b-8a25-e6b8974e40c1"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 16 01:42:31.724653 kubelet[2664]: I0516 01:42:31.723843 2664 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/31428c3c-0e29-425b-8a25-e6b8974e40c1-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "31428c3c-0e29-425b-8a25-e6b8974e40c1" (UID: "31428c3c-0e29-425b-8a25-e6b8974e40c1"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 16 01:42:31.724653 kubelet[2664]: I0516 01:42:31.723903 2664 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/31428c3c-0e29-425b-8a25-e6b8974e40c1-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "31428c3c-0e29-425b-8a25-e6b8974e40c1" (UID: "31428c3c-0e29-425b-8a25-e6b8974e40c1"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 16 01:42:31.724653 kubelet[2664]: I0516 01:42:31.723937 2664 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/31428c3c-0e29-425b-8a25-e6b8974e40c1-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "31428c3c-0e29-425b-8a25-e6b8974e40c1" (UID: "31428c3c-0e29-425b-8a25-e6b8974e40c1"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 16 01:42:31.724653 kubelet[2664]: I0516 01:42:31.724279 2664 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/31428c3c-0e29-425b-8a25-e6b8974e40c1-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "31428c3c-0e29-425b-8a25-e6b8974e40c1" (UID: "31428c3c-0e29-425b-8a25-e6b8974e40c1"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 16 01:42:31.724962 kubelet[2664]: I0516 01:42:31.724763 2664 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/31428c3c-0e29-425b-8a25-e6b8974e40c1-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "31428c3c-0e29-425b-8a25-e6b8974e40c1" (UID: "31428c3c-0e29-425b-8a25-e6b8974e40c1"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 16 01:42:31.727655 kubelet[2664]: I0516 01:42:31.726722 2664 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/07e7820b-3ca5-49a7-b324-f7e817a58649-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "07e7820b-3ca5-49a7-b324-f7e817a58649" (UID: "07e7820b-3ca5-49a7-b324-f7e817a58649"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
May 16 01:42:31.731693 kubelet[2664]: I0516 01:42:31.731568 2664 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/31428c3c-0e29-425b-8a25-e6b8974e40c1-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "31428c3c-0e29-425b-8a25-e6b8974e40c1" (UID: "31428c3c-0e29-425b-8a25-e6b8974e40c1"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 16 01:42:31.731925 kubelet[2664]: I0516 01:42:31.731903 2664 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07e7820b-3ca5-49a7-b324-f7e817a58649-kube-api-access-fgffg" (OuterVolumeSpecName: "kube-api-access-fgffg") pod "07e7820b-3ca5-49a7-b324-f7e817a58649" (UID: "07e7820b-3ca5-49a7-b324-f7e817a58649"). InnerVolumeSpecName "kube-api-access-fgffg". PluginName "kubernetes.io/projected", VolumeGidValue ""
May 16 01:42:31.735773 kubelet[2664]: I0516 01:42:31.735739 2664 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31428c3c-0e29-425b-8a25-e6b8974e40c1-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "31428c3c-0e29-425b-8a25-e6b8974e40c1" (UID: "31428c3c-0e29-425b-8a25-e6b8974e40c1"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
May 16 01:42:31.736961 kubelet[2664]: I0516 01:42:31.736897 2664 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31428c3c-0e29-425b-8a25-e6b8974e40c1-kube-api-access-g8ndv" (OuterVolumeSpecName: "kube-api-access-g8ndv") pod "31428c3c-0e29-425b-8a25-e6b8974e40c1" (UID: "31428c3c-0e29-425b-8a25-e6b8974e40c1"). InnerVolumeSpecName "kube-api-access-g8ndv". PluginName "kubernetes.io/projected", VolumeGidValue ""
May 16 01:42:31.738030 kubelet[2664]: I0516 01:42:31.738004 2664 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31428c3c-0e29-425b-8a25-e6b8974e40c1-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "31428c3c-0e29-425b-8a25-e6b8974e40c1" (UID: "31428c3c-0e29-425b-8a25-e6b8974e40c1"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
May 16 01:42:31.738707 kubelet[2664]: I0516 01:42:31.738667 2664 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31428c3c-0e29-425b-8a25-e6b8974e40c1-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "31428c3c-0e29-425b-8a25-e6b8974e40c1" (UID: "31428c3c-0e29-425b-8a25-e6b8974e40c1"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
May 16 01:42:31.816038 kubelet[2664]: I0516 01:42:31.815801 2664 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/31428c3c-0e29-425b-8a25-e6b8974e40c1-host-proc-sys-kernel\") on node \"ci-4152-2-3-n-26e690edb8.novalocal\" DevicePath \"\""
May 16 01:42:31.816038 kubelet[2664]: I0516 01:42:31.815838 2664 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/31428c3c-0e29-425b-8a25-e6b8974e40c1-hostproc\") on node \"ci-4152-2-3-n-26e690edb8.novalocal\" DevicePath \"\""
May 16 01:42:31.816038 kubelet[2664]: I0516 01:42:31.815851 2664 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fgffg\" (UniqueName: \"kubernetes.io/projected/07e7820b-3ca5-49a7-b324-f7e817a58649-kube-api-access-fgffg\") on node \"ci-4152-2-3-n-26e690edb8.novalocal\" DevicePath \"\""
May 16 01:42:31.816038 kubelet[2664]: I0516 01:42:31.815864 2664 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/31428c3c-0e29-425b-8a25-e6b8974e40c1-host-proc-sys-net\") on node \"ci-4152-2-3-n-26e690edb8.novalocal\" DevicePath \"\""
May 16 01:42:31.816038 kubelet[2664]: I0516 01:42:31.815875 2664 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/31428c3c-0e29-425b-8a25-e6b8974e40c1-clustermesh-secrets\") on node \"ci-4152-2-3-n-26e690edb8.novalocal\" DevicePath \"\""
May 16 01:42:31.816038 kubelet[2664]: I0516 01:42:31.815887 2664 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/31428c3c-0e29-425b-8a25-e6b8974e40c1-bpf-maps\") on node \"ci-4152-2-3-n-26e690edb8.novalocal\" DevicePath \"\""
May 16 01:42:31.816038 kubelet[2664]: I0516 01:42:31.815897 2664 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/31428c3c-0e29-425b-8a25-e6b8974e40c1-cilium-run\") on node \"ci-4152-2-3-n-26e690edb8.novalocal\" DevicePath \"\""
May 16 01:42:31.816410 kubelet[2664]: I0516 01:42:31.815907 2664 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/31428c3c-0e29-425b-8a25-e6b8974e40c1-lib-modules\") on node \"ci-4152-2-3-n-26e690edb8.novalocal\" DevicePath \"\""
May 16 01:42:31.816410 kubelet[2664]: I0516 01:42:31.815917 2664 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/07e7820b-3ca5-49a7-b324-f7e817a58649-cilium-config-path\") on node \"ci-4152-2-3-n-26e690edb8.novalocal\" DevicePath \"\""
May 16 01:42:31.816410 kubelet[2664]: I0516 01:42:31.815928 2664 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/31428c3c-0e29-425b-8a25-e6b8974e40c1-hubble-tls\") on node \"ci-4152-2-3-n-26e690edb8.novalocal\" DevicePath \"\""
May 16 01:42:31.816410 kubelet[2664]: I0516 01:42:31.815951 2664 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/31428c3c-0e29-425b-8a25-e6b8974e40c1-xtables-lock\") on node \"ci-4152-2-3-n-26e690edb8.novalocal\" DevicePath \"\""
May 16 01:42:31.816410 kubelet[2664]: I0516 01:42:31.815962 2664 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g8ndv\" (UniqueName: \"kubernetes.io/projected/31428c3c-0e29-425b-8a25-e6b8974e40c1-kube-api-access-g8ndv\") on node \"ci-4152-2-3-n-26e690edb8.novalocal\" DevicePath \"\""
May 16 01:42:31.816410 kubelet[2664]: I0516 01:42:31.815972 2664 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/31428c3c-0e29-425b-8a25-e6b8974e40c1-etc-cni-netd\") on node \"ci-4152-2-3-n-26e690edb8.novalocal\" DevicePath \"\""
May 16 01:42:31.816410 kubelet[2664]: I0516 01:42:31.815984 2664 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/31428c3c-0e29-425b-8a25-e6b8974e40c1-cilium-config-path\") on node \"ci-4152-2-3-n-26e690edb8.novalocal\" DevicePath \"\""
May 16 01:42:31.816716 kubelet[2664]: I0516 01:42:31.816002 2664 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/31428c3c-0e29-425b-8a25-e6b8974e40c1-cilium-cgroup\") on node \"ci-4152-2-3-n-26e690edb8.novalocal\" DevicePath \"\""
May 16 01:42:31.816716 kubelet[2664]: I0516 01:42:31.816012 2664 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/31428c3c-0e29-425b-8a25-e6b8974e40c1-cni-path\") on node \"ci-4152-2-3-n-26e690edb8.novalocal\" DevicePath \"\""
May 16 01:42:32.345949 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4d4ff822c40dac53726d93b05dfee21ecc81b2369df29491b497929459f866f9-rootfs.mount: Deactivated successfully.
May 16 01:42:32.346232 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4d4ff822c40dac53726d93b05dfee21ecc81b2369df29491b497929459f866f9-shm.mount: Deactivated successfully.
May 16 01:42:32.346425 systemd[1]: var-lib-kubelet-pods-31428c3c\x2d0e29\x2d425b\x2d8a25\x2de6b8974e40c1-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dg8ndv.mount: Deactivated successfully.
May 16 01:42:32.347341 systemd[1]: var-lib-kubelet-pods-31428c3c\x2d0e29\x2d425b\x2d8a25\x2de6b8974e40c1-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
May 16 01:42:32.347819 systemd[1]: var-lib-kubelet-pods-07e7820b\x2d3ca5\x2d49a7\x2db324\x2df7e817a58649-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dfgffg.mount: Deactivated successfully.
May 16 01:42:32.348004 systemd[1]: var-lib-kubelet-pods-31428c3c\x2d0e29\x2d425b\x2d8a25\x2de6b8974e40c1-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
May 16 01:42:32.452041 kubelet[2664]: I0516 01:42:32.451741 2664 scope.go:117] "RemoveContainer" containerID="1cbd2c43f46565b93389d09aef32b7973918a735edef87b7534764b9c3142744"
May 16 01:42:32.465731 containerd[1464]: time="2025-05-16T01:42:32.463355208Z" level=info msg="RemoveContainer for \"1cbd2c43f46565b93389d09aef32b7973918a735edef87b7534764b9c3142744\""
May 16 01:42:32.481009 systemd[1]: Removed slice kubepods-besteffort-pod07e7820b_3ca5_49a7_b324_f7e817a58649.slice - libcontainer container kubepods-besteffort-pod07e7820b_3ca5_49a7_b324_f7e817a58649.slice.
May 16 01:42:32.482051 systemd[1]: kubepods-besteffort-pod07e7820b_3ca5_49a7_b324_f7e817a58649.slice: Consumed 1.305s CPU time.
May 16 01:42:32.491705 systemd[1]: Removed slice kubepods-burstable-pod31428c3c_0e29_425b_8a25_e6b8974e40c1.slice - libcontainer container kubepods-burstable-pod31428c3c_0e29_425b_8a25_e6b8974e40c1.slice.
May 16 01:42:32.492128 systemd[1]: kubepods-burstable-pod31428c3c_0e29_425b_8a25_e6b8974e40c1.slice: Consumed 10.664s CPU time.
May 16 01:42:32.496438 containerd[1464]: time="2025-05-16T01:42:32.495754079Z" level=info msg="RemoveContainer for \"1cbd2c43f46565b93389d09aef32b7973918a735edef87b7534764b9c3142744\" returns successfully"
May 16 01:42:32.497747 kubelet[2664]: I0516 01:42:32.496957 2664 scope.go:117] "RemoveContainer" containerID="1cbd2c43f46565b93389d09aef32b7973918a735edef87b7534764b9c3142744"
May 16 01:42:32.499003 containerd[1464]: time="2025-05-16T01:42:32.498625029Z" level=error msg="ContainerStatus for \"1cbd2c43f46565b93389d09aef32b7973918a735edef87b7534764b9c3142744\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1cbd2c43f46565b93389d09aef32b7973918a735edef87b7534764b9c3142744\": not found"
May 16 01:42:32.500549 kubelet[2664]: E0516 01:42:32.500124 2664 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1cbd2c43f46565b93389d09aef32b7973918a735edef87b7534764b9c3142744\": not found" containerID="1cbd2c43f46565b93389d09aef32b7973918a735edef87b7534764b9c3142744"
May 16 01:42:32.502717 kubelet[2664]: I0516 01:42:32.501484 2664 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1cbd2c43f46565b93389d09aef32b7973918a735edef87b7534764b9c3142744"} err="failed to get container status \"1cbd2c43f46565b93389d09aef32b7973918a735edef87b7534764b9c3142744\": rpc error: code = NotFound desc = an error occurred when try to find container \"1cbd2c43f46565b93389d09aef32b7973918a735edef87b7534764b9c3142744\": not found"
May 16 01:42:32.502717 kubelet[2664]: I0516 01:42:32.502025 2664 scope.go:117] "RemoveContainer" containerID="070b681782ba3bb71a3b5d6aa5e96ba38976302e619c21c74fbeb2008cf83d60"
May 16 01:42:32.509294 containerd[1464]: time="2025-05-16T01:42:32.509183655Z" level=info msg="RemoveContainer for \"070b681782ba3bb71a3b5d6aa5e96ba38976302e619c21c74fbeb2008cf83d60\""
May 16 01:42:32.529794 containerd[1464]: time="2025-05-16T01:42:32.528859903Z" level=info msg="RemoveContainer for \"070b681782ba3bb71a3b5d6aa5e96ba38976302e619c21c74fbeb2008cf83d60\" returns successfully"
May 16 01:42:32.531026 kubelet[2664]: I0516 01:42:32.530401 2664 scope.go:117] "RemoveContainer" containerID="68022f20ef3fc12cc4461ab346fda1ec2a6c6f3005312787fecf43255d7fbfde"
May 16 01:42:32.535331 containerd[1464]: time="2025-05-16T01:42:32.535130128Z" level=info msg="RemoveContainer for \"68022f20ef3fc12cc4461ab346fda1ec2a6c6f3005312787fecf43255d7fbfde\""
May 16 01:42:32.541982 containerd[1464]: time="2025-05-16T01:42:32.541768637Z" level=info msg="RemoveContainer for \"68022f20ef3fc12cc4461ab346fda1ec2a6c6f3005312787fecf43255d7fbfde\" returns successfully"
May 16 01:42:32.542434 kubelet[2664]: I0516 01:42:32.542016 2664 scope.go:117] "RemoveContainer" containerID="a2a99bc74137562981aca05e9de136e16f967a33dccebaa6edc0a0bb4276c7c2"
May 16 01:42:32.544697 containerd[1464]: time="2025-05-16T01:42:32.544661669Z" level=info msg="RemoveContainer for \"a2a99bc74137562981aca05e9de136e16f967a33dccebaa6edc0a0bb4276c7c2\""
May 16 01:42:32.551709 containerd[1464]: time="2025-05-16T01:42:32.551366783Z" level=info msg="RemoveContainer for \"a2a99bc74137562981aca05e9de136e16f967a33dccebaa6edc0a0bb4276c7c2\" returns successfully"
May 16 01:42:32.552724 kubelet[2664]: I0516 01:42:32.552597 2664 scope.go:117] "RemoveContainer" containerID="cfbef090724045725d2ed11378ca8f160a6f6ebee9de5eb152aa2d2f5e257540"
May 16 01:42:32.555269 containerd[1464]: time="2025-05-16T01:42:32.555170391Z" level=info msg="RemoveContainer for \"cfbef090724045725d2ed11378ca8f160a6f6ebee9de5eb152aa2d2f5e257540\""
May 16 01:42:32.561660 containerd[1464]: time="2025-05-16T01:42:32.559902858Z" level=info msg="RemoveContainer for \"cfbef090724045725d2ed11378ca8f160a6f6ebee9de5eb152aa2d2f5e257540\" returns successfully"
May 16 01:42:32.561801 kubelet[2664]: I0516 01:42:32.560102 2664 scope.go:117] "RemoveContainer" containerID="95b4a4238de3618a26807c25586973a11ec473cbc69c0a4d97616300320b83d8"
May 16 01:42:32.564659 containerd[1464]: time="2025-05-16T01:42:32.563545141Z" level=info msg="RemoveContainer for \"95b4a4238de3618a26807c25586973a11ec473cbc69c0a4d97616300320b83d8\""
May 16 01:42:32.571666 containerd[1464]: time="2025-05-16T01:42:32.571500643Z" level=info msg="RemoveContainer for \"95b4a4238de3618a26807c25586973a11ec473cbc69c0a4d97616300320b83d8\" returns successfully"
May 16 01:42:32.573128 kubelet[2664]: I0516 01:42:32.572363 2664 scope.go:117] "RemoveContainer" containerID="070b681782ba3bb71a3b5d6aa5e96ba38976302e619c21c74fbeb2008cf83d60"
May 16 01:42:32.574745 containerd[1464]: time="2025-05-16T01:42:32.574053012Z" level=error msg="ContainerStatus for \"070b681782ba3bb71a3b5d6aa5e96ba38976302e619c21c74fbeb2008cf83d60\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"070b681782ba3bb71a3b5d6aa5e96ba38976302e619c21c74fbeb2008cf83d60\": not found"
May 16 01:42:32.575872 kubelet[2664]: E0516 01:42:32.575365 2664 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"070b681782ba3bb71a3b5d6aa5e96ba38976302e619c21c74fbeb2008cf83d60\": not found" containerID="070b681782ba3bb71a3b5d6aa5e96ba38976302e619c21c74fbeb2008cf83d60"
May 16 01:42:32.575872 kubelet[2664]: I0516 01:42:32.575634 2664 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"070b681782ba3bb71a3b5d6aa5e96ba38976302e619c21c74fbeb2008cf83d60"} err="failed to get container status \"070b681782ba3bb71a3b5d6aa5e96ba38976302e619c21c74fbeb2008cf83d60\": rpc error: code = NotFound desc = an error occurred when try to find container \"070b681782ba3bb71a3b5d6aa5e96ba38976302e619c21c74fbeb2008cf83d60\": not found"
May 16 01:42:32.575872 kubelet[2664]: I0516 01:42:32.575684 2664 scope.go:117] "RemoveContainer" containerID="68022f20ef3fc12cc4461ab346fda1ec2a6c6f3005312787fecf43255d7fbfde"
May 16 01:42:32.576831 containerd[1464]: time="2025-05-16T01:42:32.576245714Z" level=error msg="ContainerStatus for \"68022f20ef3fc12cc4461ab346fda1ec2a6c6f3005312787fecf43255d7fbfde\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"68022f20ef3fc12cc4461ab346fda1ec2a6c6f3005312787fecf43255d7fbfde\": not found"
May 16 01:42:32.577560 kubelet[2664]: E0516 01:42:32.577520 2664 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"68022f20ef3fc12cc4461ab346fda1ec2a6c6f3005312787fecf43255d7fbfde\": not found" containerID="68022f20ef3fc12cc4461ab346fda1ec2a6c6f3005312787fecf43255d7fbfde"
May 16 01:42:32.577648 kubelet[2664]: I0516 01:42:32.577569 2664 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"68022f20ef3fc12cc4461ab346fda1ec2a6c6f3005312787fecf43255d7fbfde"} err="failed to get container status \"68022f20ef3fc12cc4461ab346fda1ec2a6c6f3005312787fecf43255d7fbfde\": rpc error: code = NotFound desc = an error occurred when try to find container \"68022f20ef3fc12cc4461ab346fda1ec2a6c6f3005312787fecf43255d7fbfde\": not found"
May 16 01:42:32.577648 kubelet[2664]: I0516 01:42:32.577615 2664 scope.go:117] "RemoveContainer" containerID="a2a99bc74137562981aca05e9de136e16f967a33dccebaa6edc0a0bb4276c7c2"
May 16 01:42:32.578068 containerd[1464]: time="2025-05-16T01:42:32.577939085Z" level=error msg="ContainerStatus for \"a2a99bc74137562981aca05e9de136e16f967a33dccebaa6edc0a0bb4276c7c2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a2a99bc74137562981aca05e9de136e16f967a33dccebaa6edc0a0bb4276c7c2\": not found"
May 16 01:42:32.578187 kubelet[2664]: E0516 01:42:32.578131 2664 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a2a99bc74137562981aca05e9de136e16f967a33dccebaa6edc0a0bb4276c7c2\": not found" containerID="a2a99bc74137562981aca05e9de136e16f967a33dccebaa6edc0a0bb4276c7c2"
May 16 01:42:32.578277 kubelet[2664]: I0516 01:42:32.578194 2664 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a2a99bc74137562981aca05e9de136e16f967a33dccebaa6edc0a0bb4276c7c2"} err="failed to get container status \"a2a99bc74137562981aca05e9de136e16f967a33dccebaa6edc0a0bb4276c7c2\": rpc error: code = NotFound desc = an error occurred when try to find container \"a2a99bc74137562981aca05e9de136e16f967a33dccebaa6edc0a0bb4276c7c2\": not found"
May 16 01:42:32.578277 kubelet[2664]: I0516 01:42:32.578212 2664 scope.go:117] "RemoveContainer" containerID="cfbef090724045725d2ed11378ca8f160a6f6ebee9de5eb152aa2d2f5e257540"
May 16 01:42:32.578653 containerd[1464]: time="2025-05-16T01:42:32.578555857Z" level=error msg="ContainerStatus for \"cfbef090724045725d2ed11378ca8f160a6f6ebee9de5eb152aa2d2f5e257540\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"cfbef090724045725d2ed11378ca8f160a6f6ebee9de5eb152aa2d2f5e257540\": not found"
May 16 01:42:32.580380 containerd[1464]: time="2025-05-16T01:42:32.579251869Z" level=error msg="ContainerStatus for \"95b4a4238de3618a26807c25586973a11ec473cbc69c0a4d97616300320b83d8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"95b4a4238de3618a26807c25586973a11ec473cbc69c0a4d97616300320b83d8\": not found"
May 16 01:42:32.580453 kubelet[2664]: E0516 01:42:32.578806 2664 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"cfbef090724045725d2ed11378ca8f160a6f6ebee9de5eb152aa2d2f5e257540\": not found" containerID="cfbef090724045725d2ed11378ca8f160a6f6ebee9de5eb152aa2d2f5e257540"
May 16 01:42:32.580453 kubelet[2664]: I0516 01:42:32.578830 2664 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"cfbef090724045725d2ed11378ca8f160a6f6ebee9de5eb152aa2d2f5e257540"} err="failed to get container status \"cfbef090724045725d2ed11378ca8f160a6f6ebee9de5eb152aa2d2f5e257540\": rpc error: code = NotFound desc = an error occurred when try to find container \"cfbef090724045725d2ed11378ca8f160a6f6ebee9de5eb152aa2d2f5e257540\": not found"
May 16 01:42:32.580453 kubelet[2664]: I0516 01:42:32.578848 2664 scope.go:117] "RemoveContainer" containerID="95b4a4238de3618a26807c25586973a11ec473cbc69c0a4d97616300320b83d8"
May 16 01:42:32.580853 kubelet[2664]: E0516 01:42:32.580809 2664 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"95b4a4238de3618a26807c25586973a11ec473cbc69c0a4d97616300320b83d8\": not found" containerID="95b4a4238de3618a26807c25586973a11ec473cbc69c0a4d97616300320b83d8"
May 16 01:42:32.580917 kubelet[2664]: I0516 01:42:32.580889 2664 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"95b4a4238de3618a26807c25586973a11ec473cbc69c0a4d97616300320b83d8"} err="failed to get container status \"95b4a4238de3618a26807c25586973a11ec473cbc69c0a4d97616300320b83d8\": rpc error: code = NotFound desc = an error occurred when try to find container \"95b4a4238de3618a26807c25586973a11ec473cbc69c0a4d97616300320b83d8\": not found"
May 16 01:42:33.083641 kubelet[2664]: I0516 01:42:33.083378 2664 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="07e7820b-3ca5-49a7-b324-f7e817a58649" path="/var/lib/kubelet/pods/07e7820b-3ca5-49a7-b324-f7e817a58649/volumes"
May 16 01:42:33.086067 kubelet[2664]: I0516 01:42:33.086004 2664 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31428c3c-0e29-425b-8a25-e6b8974e40c1" path="/var/lib/kubelet/pods/31428c3c-0e29-425b-8a25-e6b8974e40c1/volumes"
May 16 01:42:33.475835 sshd[4286]: Connection closed by 172.24.4.1 port 52638
May 16 01:42:33.482040 sshd-session[4284]: pam_unix(sshd:session): session closed for user core
May 16 01:42:33.502522 systemd[1]: sshd@24-172.24.4.31:22-172.24.4.1:52638.service: Deactivated successfully.
May 16 01:42:33.508740 systemd[1]: session-27.scope: Deactivated successfully.
May 16 01:42:33.510852 systemd-logind[1441]: Session 27 logged out. Waiting for processes to exit.
May 16 01:42:33.525368 systemd[1]: Started sshd@25-172.24.4.31:22-172.24.4.1:52642.service - OpenSSH per-connection server daemon (172.24.4.1:52642).
May 16 01:42:33.530521 systemd-logind[1441]: Removed session 27.
May 16 01:42:34.319544 kubelet[2664]: E0516 01:42:34.319414 2664 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 16 01:42:34.735399 sshd[4444]: Accepted publickey for core from 172.24.4.1 port 52642 ssh2: RSA SHA256:hbqBC1N3eVnarOblqHTsu5pd4fHUzAJAaCt0vGQ0ke0
May 16 01:42:34.736877 sshd-session[4444]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 01:42:34.749856 systemd-logind[1441]: New session 28 of user core.
May 16 01:42:34.762947 systemd[1]: Started session-28.scope - Session 28 of User core.
May 16 01:42:36.067948 kubelet[2664]: I0516 01:42:36.065567 2664 setters.go:600] "Node became not ready" node="ci-4152-2-3-n-26e690edb8.novalocal" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-16T01:42:36Z","lastTransitionTime":"2025-05-16T01:42:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
May 16 01:42:36.568910 kubelet[2664]: E0516 01:42:36.568807 2664 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="07e7820b-3ca5-49a7-b324-f7e817a58649" containerName="cilium-operator"
May 16 01:42:36.568910 kubelet[2664]: E0516 01:42:36.568890 2664 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="31428c3c-0e29-425b-8a25-e6b8974e40c1" containerName="mount-cgroup"
May 16 01:42:36.568910 kubelet[2664]: E0516 01:42:36.568900 2664 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="31428c3c-0e29-425b-8a25-e6b8974e40c1" containerName="apply-sysctl-overwrites"
May 16 01:42:36.568910 kubelet[2664]: E0516 01:42:36.568908 2664 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="31428c3c-0e29-425b-8a25-e6b8974e40c1" containerName="clean-cilium-state"
May 16 01:42:36.568910 kubelet[2664]: E0516 01:42:36.568917 2664 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="31428c3c-0e29-425b-8a25-e6b8974e40c1" containerName="cilium-agent"
May 16 01:42:36.568910 kubelet[2664]: E0516 01:42:36.568928 2664 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="31428c3c-0e29-425b-8a25-e6b8974e40c1" containerName="mount-bpf-fs"
May 16 01:42:36.569374 kubelet[2664]: I0516 01:42:36.569008 2664 memory_manager.go:354] "RemoveStaleState removing state" podUID="07e7820b-3ca5-49a7-b324-f7e817a58649" containerName="cilium-operator"
May 16 01:42:36.569374 kubelet[2664]: I0516 01:42:36.569022 2664 memory_manager.go:354] "RemoveStaleState removing state" podUID="31428c3c-0e29-425b-8a25-e6b8974e40c1" containerName="cilium-agent"
May 16 01:42:36.587249 systemd[1]: Created slice kubepods-burstable-pod41f99ba2_119f_40b9_b3a6_d36f86a0dc90.slice - libcontainer container kubepods-burstable-pod41f99ba2_119f_40b9_b3a6_d36f86a0dc90.slice.
May 16 01:42:36.656952 kubelet[2664]: I0516 01:42:36.656672 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/41f99ba2-119f-40b9-b3a6-d36f86a0dc90-host-proc-sys-net\") pod \"cilium-fv6tf\" (UID: \"41f99ba2-119f-40b9-b3a6-d36f86a0dc90\") " pod="kube-system/cilium-fv6tf"
May 16 01:42:36.656952 kubelet[2664]: I0516 01:42:36.656744 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crb28\" (UniqueName: \"kubernetes.io/projected/41f99ba2-119f-40b9-b3a6-d36f86a0dc90-kube-api-access-crb28\") pod \"cilium-fv6tf\" (UID: \"41f99ba2-119f-40b9-b3a6-d36f86a0dc90\") " pod="kube-system/cilium-fv6tf"
May 16 01:42:36.656952 kubelet[2664]: I0516 01:42:36.656769 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/41f99ba2-119f-40b9-b3a6-d36f86a0dc90-bpf-maps\") pod \"cilium-fv6tf\" (UID: \"41f99ba2-119f-40b9-b3a6-d36f86a0dc90\") " pod="kube-system/cilium-fv6tf"
May 16 01:42:36.656952 kubelet[2664]: I0516 01:42:36.656795 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/41f99ba2-119f-40b9-b3a6-d36f86a0dc90-hostproc\") pod \"cilium-fv6tf\" (UID: \"41f99ba2-119f-40b9-b3a6-d36f86a0dc90\") " pod="kube-system/cilium-fv6tf"
May 16 01:42:36.656952 kubelet[2664]: I0516 01:42:36.656821 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/41f99ba2-119f-40b9-b3a6-d36f86a0dc90-cilium-ipsec-secrets\") pod \"cilium-fv6tf\" (UID: \"41f99ba2-119f-40b9-b3a6-d36f86a0dc90\") " pod="kube-system/cilium-fv6tf"
May 16 01:42:36.656952 kubelet[2664]: I0516 01:42:36.656852 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/41f99ba2-119f-40b9-b3a6-d36f86a0dc90-hubble-tls\") pod \"cilium-fv6tf\" (UID: \"41f99ba2-119f-40b9-b3a6-d36f86a0dc90\") " pod="kube-system/cilium-fv6tf"
May 16 01:42:36.657390 kubelet[2664]: I0516 01:42:36.656955 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/41f99ba2-119f-40b9-b3a6-d36f86a0dc90-etc-cni-netd\") pod \"cilium-fv6tf\" (UID: \"41f99ba2-119f-40b9-b3a6-d36f86a0dc90\") " pod="kube-system/cilium-fv6tf"
May 16 01:42:36.657390 kubelet[2664]: I0516 01:42:36.656981 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/41f99ba2-119f-40b9-b3a6-d36f86a0dc90-xtables-lock\") pod \"cilium-fv6tf\" (UID: \"41f99ba2-119f-40b9-b3a6-d36f86a0dc90\") " pod="kube-system/cilium-fv6tf"
May 16 01:42:36.657390 kubelet[2664]: I0516 01:42:36.657008 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/41f99ba2-119f-40b9-b3a6-d36f86a0dc90-cilium-config-path\") pod \"cilium-fv6tf\" (UID: \"41f99ba2-119f-40b9-b3a6-d36f86a0dc90\") " pod="kube-system/cilium-fv6tf"
May 16 01:42:36.657390 kubelet[2664]: I0516 01:42:36.657033 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/41f99ba2-119f-40b9-b3a6-d36f86a0dc90-cilium-cgroup\") pod \"cilium-fv6tf\" (UID: \"41f99ba2-119f-40b9-b3a6-d36f86a0dc90\") " pod="kube-system/cilium-fv6tf"
May 16 01:42:36.657390 kubelet[2664]: I0516 01:42:36.657051 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/41f99ba2-119f-40b9-b3a6-d36f86a0dc90-lib-modules\") pod \"cilium-fv6tf\" (UID: \"41f99ba2-119f-40b9-b3a6-d36f86a0dc90\") " pod="kube-system/cilium-fv6tf"
May 16 01:42:36.657390 kubelet[2664]: I0516 01:42:36.657087 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/41f99ba2-119f-40b9-b3a6-d36f86a0dc90-clustermesh-secrets\") pod \"cilium-fv6tf\" (UID: \"41f99ba2-119f-40b9-b3a6-d36f86a0dc90\") " pod="kube-system/cilium-fv6tf"
May 16 01:42:36.657641 kubelet[2664]: I0516 01:42:36.657111 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/41f99ba2-119f-40b9-b3a6-d36f86a0dc90-host-proc-sys-kernel\") pod \"cilium-fv6tf\" (UID: \"41f99ba2-119f-40b9-b3a6-d36f86a0dc90\") " pod="kube-system/cilium-fv6tf"
May 16 01:42:36.657641 kubelet[2664]: I0516 01:42:36.657134 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/41f99ba2-119f-40b9-b3a6-d36f86a0dc90-cni-path\") pod \"cilium-fv6tf\" (UID: \"41f99ba2-119f-40b9-b3a6-d36f86a0dc90\") " pod="kube-system/cilium-fv6tf"
May 16 01:42:36.657641 kubelet[2664]: I0516 01:42:36.657167 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/41f99ba2-119f-40b9-b3a6-d36f86a0dc90-cilium-run\") pod \"cilium-fv6tf\" (UID: \"41f99ba2-119f-40b9-b3a6-d36f86a0dc90\") " pod="kube-system/cilium-fv6tf"
May 16 01:42:36.747677 sshd[4446]: Connection closed by 172.24.4.1 port 52642
May 16 01:42:36.749114 sshd-session[4444]: pam_unix(sshd:session): session closed for user core
May 16 01:42:36.767541 systemd[1]: sshd@25-172.24.4.31:22-172.24.4.1:52642.service: Deactivated successfully.
May 16 01:42:36.778848 systemd[1]: session-28.scope: Deactivated successfully.
May 16 01:42:36.780522 systemd[1]: session-28.scope: Consumed 1.170s CPU time.
May 16 01:42:36.784467 systemd-logind[1441]: Session 28 logged out. Waiting for processes to exit.
May 16 01:42:36.798665 systemd[1]: Started sshd@26-172.24.4.31:22-172.24.4.1:46948.service - OpenSSH per-connection server daemon (172.24.4.1:46948).
May 16 01:42:36.847883 systemd-logind[1441]: Removed session 28.
May 16 01:42:36.896956 containerd[1464]: time="2025-05-16T01:42:36.896742974Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-fv6tf,Uid:41f99ba2-119f-40b9-b3a6-d36f86a0dc90,Namespace:kube-system,Attempt:0,}"
May 16 01:42:36.937697 containerd[1464]: time="2025-05-16T01:42:36.936501892Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 16 01:42:36.937697 containerd[1464]: time="2025-05-16T01:42:36.936631015Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 16 01:42:36.937697 containerd[1464]: time="2025-05-16T01:42:36.936646334Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 16 01:42:36.937697 containerd[1464]: time="2025-05-16T01:42:36.936747795Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 16 01:42:36.964845 systemd[1]: Started cri-containerd-33789fdb556416aeed6c2af15cb0afd13d09dff324144033091b4a6da26c5d59.scope - libcontainer container 33789fdb556416aeed6c2af15cb0afd13d09dff324144033091b4a6da26c5d59.
May 16 01:42:36.997699 containerd[1464]: time="2025-05-16T01:42:36.997406153Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-fv6tf,Uid:41f99ba2-119f-40b9-b3a6-d36f86a0dc90,Namespace:kube-system,Attempt:0,} returns sandbox id \"33789fdb556416aeed6c2af15cb0afd13d09dff324144033091b4a6da26c5d59\""
May 16 01:42:37.006895 containerd[1464]: time="2025-05-16T01:42:37.005897097Z" level=info msg="CreateContainer within sandbox \"33789fdb556416aeed6c2af15cb0afd13d09dff324144033091b4a6da26c5d59\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
May 16 01:42:37.026255 containerd[1464]: time="2025-05-16T01:42:37.026189391Z" level=info msg="CreateContainer within sandbox \"33789fdb556416aeed6c2af15cb0afd13d09dff324144033091b4a6da26c5d59\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"63a15f6e5cd7bb4b38379c47e8bbb11620958ae8d09ad0d525e5f71cdc5d938b\""
May 16 01:42:37.028512 containerd[1464]: time="2025-05-16T01:42:37.027108822Z" level=info msg="StartContainer for \"63a15f6e5cd7bb4b38379c47e8bbb11620958ae8d09ad0d525e5f71cdc5d938b\""
May 16 01:42:37.061838 systemd[1]: Started cri-containerd-63a15f6e5cd7bb4b38379c47e8bbb11620958ae8d09ad0d525e5f71cdc5d938b.scope - libcontainer container 63a15f6e5cd7bb4b38379c47e8bbb11620958ae8d09ad0d525e5f71cdc5d938b.
May 16 01:42:37.105955 containerd[1464]: time="2025-05-16T01:42:37.105826259Z" level=info msg="StartContainer for \"63a15f6e5cd7bb4b38379c47e8bbb11620958ae8d09ad0d525e5f71cdc5d938b\" returns successfully"
May 16 01:42:37.116086 systemd[1]: cri-containerd-63a15f6e5cd7bb4b38379c47e8bbb11620958ae8d09ad0d525e5f71cdc5d938b.scope: Deactivated successfully.
May 16 01:42:37.167362 containerd[1464]: time="2025-05-16T01:42:37.167064351Z" level=info msg="shim disconnected" id=63a15f6e5cd7bb4b38379c47e8bbb11620958ae8d09ad0d525e5f71cdc5d938b namespace=k8s.io
May 16 01:42:37.167362 containerd[1464]: time="2025-05-16T01:42:37.167210085Z" level=warning msg="cleaning up after shim disconnected" id=63a15f6e5cd7bb4b38379c47e8bbb11620958ae8d09ad0d525e5f71cdc5d938b namespace=k8s.io
May 16 01:42:37.167362 containerd[1464]: time="2025-05-16T01:42:37.167234671Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 16 01:42:37.527162 containerd[1464]: time="2025-05-16T01:42:37.526172135Z" level=info msg="CreateContainer within sandbox \"33789fdb556416aeed6c2af15cb0afd13d09dff324144033091b4a6da26c5d59\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
May 16 01:42:37.610617 containerd[1464]: time="2025-05-16T01:42:37.609679855Z" level=info msg="CreateContainer within sandbox \"33789fdb556416aeed6c2af15cb0afd13d09dff324144033091b4a6da26c5d59\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"aa4ff9322f6252972710984a59c5ca6cdf1bb2a5c66819583887eb3747b1393f\""
May 16 01:42:37.616228 containerd[1464]: time="2025-05-16T01:42:37.616147098Z" level=info msg="StartContainer for \"aa4ff9322f6252972710984a59c5ca6cdf1bb2a5c66819583887eb3747b1393f\""
May 16 01:42:37.652797 systemd[1]: Started cri-containerd-aa4ff9322f6252972710984a59c5ca6cdf1bb2a5c66819583887eb3747b1393f.scope - libcontainer container aa4ff9322f6252972710984a59c5ca6cdf1bb2a5c66819583887eb3747b1393f.
May 16 01:42:37.694943 containerd[1464]: time="2025-05-16T01:42:37.692486484Z" level=info msg="StartContainer for \"aa4ff9322f6252972710984a59c5ca6cdf1bb2a5c66819583887eb3747b1393f\" returns successfully"
May 16 01:42:37.702687 systemd[1]: cri-containerd-aa4ff9322f6252972710984a59c5ca6cdf1bb2a5c66819583887eb3747b1393f.scope: Deactivated successfully.
May 16 01:42:37.735336 containerd[1464]: time="2025-05-16T01:42:37.735082961Z" level=info msg="shim disconnected" id=aa4ff9322f6252972710984a59c5ca6cdf1bb2a5c66819583887eb3747b1393f namespace=k8s.io
May 16 01:42:37.735336 containerd[1464]: time="2025-05-16T01:42:37.735168332Z" level=warning msg="cleaning up after shim disconnected" id=aa4ff9322f6252972710984a59c5ca6cdf1bb2a5c66819583887eb3747b1393f namespace=k8s.io
May 16 01:42:37.735336 containerd[1464]: time="2025-05-16T01:42:37.735178551Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 16 01:42:38.122013 sshd[4629]: Accepted publickey for core from 172.24.4.1 port 46948 ssh2: RSA SHA256:hbqBC1N3eVnarOblqHTsu5pd4fHUzAJAaCt0vGQ0ke0
May 16 01:42:38.124190 sshd-session[4460]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 01:42:38.136732 systemd-logind[1441]: New session 29 of user core.
May 16 01:42:38.144955 systemd[1]: Started session-29.scope - Session 29 of User core.
May 16 01:42:38.533795 containerd[1464]: time="2025-05-16T01:42:38.533161083Z" level=info msg="CreateContainer within sandbox \"33789fdb556416aeed6c2af15cb0afd13d09dff324144033091b4a6da26c5d59\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
May 16 01:42:38.579968 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2831069104.mount: Deactivated successfully.
May 16 01:42:38.589263 containerd[1464]: time="2025-05-16T01:42:38.589054359Z" level=info msg="CreateContainer within sandbox \"33789fdb556416aeed6c2af15cb0afd13d09dff324144033091b4a6da26c5d59\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"bc33221f35e2dc354b3e0a8bfe4edfc62f9a2292c78b56a7ebba33d6fae3dbc6\""
May 16 01:42:38.595009 containerd[1464]: time="2025-05-16T01:42:38.594919476Z" level=info msg="StartContainer for \"bc33221f35e2dc354b3e0a8bfe4edfc62f9a2292c78b56a7ebba33d6fae3dbc6\""
May 16 01:42:38.665859 systemd[1]: Started cri-containerd-bc33221f35e2dc354b3e0a8bfe4edfc62f9a2292c78b56a7ebba33d6fae3dbc6.scope - libcontainer container bc33221f35e2dc354b3e0a8bfe4edfc62f9a2292c78b56a7ebba33d6fae3dbc6.
May 16 01:42:38.718574 systemd[1]: cri-containerd-bc33221f35e2dc354b3e0a8bfe4edfc62f9a2292c78b56a7ebba33d6fae3dbc6.scope: Deactivated successfully.
May 16 01:42:38.720434 containerd[1464]: time="2025-05-16T01:42:38.720059410Z" level=info msg="StartContainer for \"bc33221f35e2dc354b3e0a8bfe4edfc62f9a2292c78b56a7ebba33d6fae3dbc6\" returns successfully"
May 16 01:42:38.766543 containerd[1464]: time="2025-05-16T01:42:38.766441312Z" level=info msg="shim disconnected" id=bc33221f35e2dc354b3e0a8bfe4edfc62f9a2292c78b56a7ebba33d6fae3dbc6 namespace=k8s.io
May 16 01:42:38.766543 containerd[1464]: time="2025-05-16T01:42:38.766526942Z" level=warning msg="cleaning up after shim disconnected" id=bc33221f35e2dc354b3e0a8bfe4edfc62f9a2292c78b56a7ebba33d6fae3dbc6 namespace=k8s.io
May 16 01:42:38.766543 containerd[1464]: time="2025-05-16T01:42:38.766538405Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 16 01:42:38.801386 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bc33221f35e2dc354b3e0a8bfe4edfc62f9a2292c78b56a7ebba33d6fae3dbc6-rootfs.mount: Deactivated successfully.
May 16 01:42:38.942290 sshd[4629]: Connection closed by 172.24.4.1 port 46948
May 16 01:42:38.943027 sshd-session[4460]: pam_unix(sshd:session): session closed for user core
May 16 01:42:38.959094 systemd[1]: sshd@26-172.24.4.31:22-172.24.4.1:46948.service: Deactivated successfully.
May 16 01:42:38.963130 systemd[1]: session-29.scope: Deactivated successfully.
May 16 01:42:38.967941 systemd-logind[1441]: Session 29 logged out. Waiting for processes to exit.
May 16 01:42:38.976188 systemd[1]: Started sshd@27-172.24.4.31:22-172.24.4.1:46954.service - OpenSSH per-connection server daemon (172.24.4.1:46954).
May 16 01:42:38.981905 systemd-logind[1441]: Removed session 29.
May 16 01:42:39.321624 kubelet[2664]: E0516 01:42:39.321492 2664 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 16 01:42:39.544374 containerd[1464]: time="2025-05-16T01:42:39.544000288Z" level=info msg="CreateContainer within sandbox \"33789fdb556416aeed6c2af15cb0afd13d09dff324144033091b4a6da26c5d59\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 16 01:42:39.596166 containerd[1464]: time="2025-05-16T01:42:39.596060358Z" level=info msg="CreateContainer within sandbox \"33789fdb556416aeed6c2af15cb0afd13d09dff324144033091b4a6da26c5d59\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"cb4dbd7d559352585bbc0c310cbf9650e9350547d392af47e2a18e92007ce97e\""
May 16 01:42:39.601667 containerd[1464]: time="2025-05-16T01:42:39.599202535Z" level=info msg="StartContainer for \"cb4dbd7d559352585bbc0c310cbf9650e9350547d392af47e2a18e92007ce97e\""
May 16 01:42:39.656867 systemd[1]: Started cri-containerd-cb4dbd7d559352585bbc0c310cbf9650e9350547d392af47e2a18e92007ce97e.scope - libcontainer container cb4dbd7d559352585bbc0c310cbf9650e9350547d392af47e2a18e92007ce97e.
May 16 01:42:39.686706 systemd[1]: cri-containerd-cb4dbd7d559352585bbc0c310cbf9650e9350547d392af47e2a18e92007ce97e.scope: Deactivated successfully.
May 16 01:42:39.695681 containerd[1464]: time="2025-05-16T01:42:39.695500121Z" level=info msg="StartContainer for \"cb4dbd7d559352585bbc0c310cbf9650e9350547d392af47e2a18e92007ce97e\" returns successfully"
May 16 01:42:39.729569 containerd[1464]: time="2025-05-16T01:42:39.729320708Z" level=info msg="shim disconnected" id=cb4dbd7d559352585bbc0c310cbf9650e9350547d392af47e2a18e92007ce97e namespace=k8s.io
May 16 01:42:39.729569 containerd[1464]: time="2025-05-16T01:42:39.729384790Z" level=warning msg="cleaning up after shim disconnected" id=cb4dbd7d559352585bbc0c310cbf9650e9350547d392af47e2a18e92007ce97e namespace=k8s.io
May 16 01:42:39.729569 containerd[1464]: time="2025-05-16T01:42:39.729395730Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 16 01:42:39.803894 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cb4dbd7d559352585bbc0c310cbf9650e9350547d392af47e2a18e92007ce97e-rootfs.mount: Deactivated successfully.
May 16 01:42:40.233771 sshd[4695]: Accepted publickey for core from 172.24.4.1 port 46954 ssh2: RSA SHA256:hbqBC1N3eVnarOblqHTsu5pd4fHUzAJAaCt0vGQ0ke0
May 16 01:42:40.236941 sshd-session[4695]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 01:42:40.250745 systemd-logind[1441]: New session 30 of user core.
May 16 01:42:40.261973 systemd[1]: Started session-30.scope - Session 30 of User core.
May 16 01:42:40.556346 containerd[1464]: time="2025-05-16T01:42:40.556135491Z" level=info msg="CreateContainer within sandbox \"33789fdb556416aeed6c2af15cb0afd13d09dff324144033091b4a6da26c5d59\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 16 01:42:40.630617 containerd[1464]: time="2025-05-16T01:42:40.623181647Z" level=info msg="CreateContainer within sandbox \"33789fdb556416aeed6c2af15cb0afd13d09dff324144033091b4a6da26c5d59\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"c6b2ec8791aa1a0d77b9d185ec58e461ebdd7149e6e17e447a7cb624b207866a\""
May 16 01:42:40.630617 containerd[1464]: time="2025-05-16T01:42:40.624864376Z" level=info msg="StartContainer for \"c6b2ec8791aa1a0d77b9d185ec58e461ebdd7149e6e17e447a7cb624b207866a\""
May 16 01:42:40.697852 systemd[1]: Started cri-containerd-c6b2ec8791aa1a0d77b9d185ec58e461ebdd7149e6e17e447a7cb624b207866a.scope - libcontainer container c6b2ec8791aa1a0d77b9d185ec58e461ebdd7149e6e17e447a7cb624b207866a.
May 16 01:42:40.768049 containerd[1464]: time="2025-05-16T01:42:40.767989158Z" level=info msg="StartContainer for \"c6b2ec8791aa1a0d77b9d185ec58e461ebdd7149e6e17e447a7cb624b207866a\" returns successfully"
May 16 01:42:41.201673 kernel: cryptd: max_cpu_qlen set to 1000
May 16 01:42:41.257781 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm_base(ctr(aes-generic),ghash-generic))))
May 16 01:42:41.597923 kubelet[2664]: I0516 01:42:41.597668 2664 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-fv6tf" podStartSLOduration=5.595732578 podStartE2EDuration="5.595732578s" podCreationTimestamp="2025-05-16 01:42:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 01:42:41.59350306 +0000 UTC m=+322.748837858" watchObservedRunningTime="2025-05-16 01:42:41.595732578 +0000 UTC m=+322.751067416"
May 16 01:42:44.620629 systemd-networkd[1365]: lxc_health: Link UP
May 16 01:42:44.642157 systemd-networkd[1365]: lxc_health: Gained carrier
May 16 01:42:45.260074 systemd[1]: run-containerd-runc-k8s.io-c6b2ec8791aa1a0d77b9d185ec58e461ebdd7149e6e17e447a7cb624b207866a-runc.Brqtre.mount: Deactivated successfully.
May 16 01:42:46.001883 systemd-networkd[1365]: lxc_health: Gained IPv6LL
May 16 01:42:47.559701 systemd[1]: run-containerd-runc-k8s.io-c6b2ec8791aa1a0d77b9d185ec58e461ebdd7149e6e17e447a7cb624b207866a-runc.9NwPTQ.mount: Deactivated successfully.
May 16 01:42:47.619804 kubelet[2664]: E0516 01:42:47.619732 2664 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:49108->127.0.0.1:36015: write tcp 127.0.0.1:49108->127.0.0.1:36015: write: broken pipe
May 16 01:42:49.825151 systemd[1]: run-containerd-runc-k8s.io-c6b2ec8791aa1a0d77b9d185ec58e461ebdd7149e6e17e447a7cb624b207866a-runc.Q7ebpl.mount: Deactivated successfully.
May 16 01:42:52.129952 kubelet[2664]: E0516 01:42:52.129871 2664 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:49116->127.0.0.1:36015: write tcp 127.0.0.1:49116->127.0.0.1:36015: write: broken pipe
May 16 01:42:52.488863 sshd[4752]: Connection closed by 172.24.4.1 port 46954
May 16 01:42:52.496691 sshd-session[4695]: pam_unix(sshd:session): session closed for user core
May 16 01:42:52.506490 systemd[1]: sshd@27-172.24.4.31:22-172.24.4.1:46954.service: Deactivated successfully.
May 16 01:42:52.512759 systemd[1]: session-30.scope: Deactivated successfully.
May 16 01:42:52.516381 systemd-logind[1441]: Session 30 logged out. Waiting for processes to exit.
May 16 01:42:52.519969 systemd-logind[1441]: Removed session 30.
May 16 01:43:19.138158 containerd[1464]: time="2025-05-16T01:43:19.138029207Z" level=info msg="StopPodSandbox for \"bb375f658fead2ee1875c70fda20f610293f083f1e6d20dc81a1e988e969a77d\""
May 16 01:43:19.139279 containerd[1464]: time="2025-05-16T01:43:19.138266003Z" level=info msg="TearDown network for sandbox \"bb375f658fead2ee1875c70fda20f610293f083f1e6d20dc81a1e988e969a77d\" successfully"
May 16 01:43:19.139279 containerd[1464]: time="2025-05-16T01:43:19.138300668Z" level=info msg="StopPodSandbox for \"bb375f658fead2ee1875c70fda20f610293f083f1e6d20dc81a1e988e969a77d\" returns successfully"
May 16 01:43:19.140111 containerd[1464]: time="2025-05-16T01:43:19.140001565Z" level=info msg="RemovePodSandbox for \"bb375f658fead2ee1875c70fda20f610293f083f1e6d20dc81a1e988e969a77d\""
May 16 01:43:19.140210 containerd[1464]: time="2025-05-16T01:43:19.140144113Z" level=info msg="Forcibly stopping sandbox \"bb375f658fead2ee1875c70fda20f610293f083f1e6d20dc81a1e988e969a77d\""
May 16 01:43:19.140683 containerd[1464]: time="2025-05-16T01:43:19.140409832Z" level=info msg="TearDown network for sandbox \"bb375f658fead2ee1875c70fda20f610293f083f1e6d20dc81a1e988e969a77d\" successfully"
May 16 01:43:19.149056 containerd[1464]: time="2025-05-16T01:43:19.148928884Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"bb375f658fead2ee1875c70fda20f610293f083f1e6d20dc81a1e988e969a77d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
May 16 01:43:19.149309 containerd[1464]: time="2025-05-16T01:43:19.149083695Z" level=info msg="RemovePodSandbox \"bb375f658fead2ee1875c70fda20f610293f083f1e6d20dc81a1e988e969a77d\" returns successfully"
May 16 01:43:19.150429 containerd[1464]: time="2025-05-16T01:43:19.150327403Z" level=info msg="StopPodSandbox for \"4d4ff822c40dac53726d93b05dfee21ecc81b2369df29491b497929459f866f9\""
May 16 01:43:19.150579 containerd[1464]: time="2025-05-16T01:43:19.150535123Z" level=info msg="TearDown network for sandbox \"4d4ff822c40dac53726d93b05dfee21ecc81b2369df29491b497929459f866f9\" successfully"
May 16 01:43:19.150579 containerd[1464]: time="2025-05-16T01:43:19.150568325Z" level=info msg="StopPodSandbox for \"4d4ff822c40dac53726d93b05dfee21ecc81b2369df29491b497929459f866f9\" returns successfully"
May 16 01:43:19.151536 containerd[1464]: time="2025-05-16T01:43:19.151379490Z" level=info msg="RemovePodSandbox for \"4d4ff822c40dac53726d93b05dfee21ecc81b2369df29491b497929459f866f9\""
May 16 01:43:19.151536 containerd[1464]: time="2025-05-16T01:43:19.151522980Z" level=info msg="Forcibly stopping sandbox \"4d4ff822c40dac53726d93b05dfee21ecc81b2369df29491b497929459f866f9\""
May 16 01:43:19.151886 containerd[1464]: time="2025-05-16T01:43:19.151707556Z" level=info msg="TearDown network for sandbox \"4d4ff822c40dac53726d93b05dfee21ecc81b2369df29491b497929459f866f9\" successfully"
May 16 01:43:19.160646 containerd[1464]: time="2025-05-16T01:43:19.158214657Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4d4ff822c40dac53726d93b05dfee21ecc81b2369df29491b497929459f866f9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
May 16 01:43:19.160646 containerd[1464]: time="2025-05-16T01:43:19.158327499Z" level=info msg="RemovePodSandbox \"4d4ff822c40dac53726d93b05dfee21ecc81b2369df29491b497929459f866f9\" returns successfully"
May 16 01:43:50.701184 update_engine[1448]: I20250516 01:43:50.700507 1448 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
May 16 01:43:50.701184 update_engine[1448]: I20250516 01:43:50.700780 1448 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
May 16 01:43:50.702719 update_engine[1448]: I20250516 01:43:50.701583 1448 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
May 16 01:43:50.702825 update_engine[1448]: I20250516 01:43:50.702792 1448 omaha_request_params.cc:62] Current group set to stable
May 16 01:43:50.704076 update_engine[1448]: I20250516 01:43:50.703394 1448 update_attempter.cc:499] Already updated boot flags. Skipping.
May 16 01:43:50.704076 update_engine[1448]: I20250516 01:43:50.703440 1448 update_attempter.cc:643] Scheduling an action processor start.
May 16 01:43:50.704076 update_engine[1448]: I20250516 01:43:50.703498 1448 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
May 16 01:43:50.704076 update_engine[1448]: I20250516 01:43:50.703851 1448 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
May 16 01:43:50.704440 update_engine[1448]: I20250516 01:43:50.704062 1448 omaha_request_action.cc:271] Posting an Omaha request to disabled
May 16 01:43:50.704440 update_engine[1448]: I20250516 01:43:50.704100 1448 omaha_request_action.cc:272] Request:
May 16 01:43:50.704440 update_engine[1448]:
May 16 01:43:50.704440 update_engine[1448]:
May 16 01:43:50.704440 update_engine[1448]:
May 16 01:43:50.704440 update_engine[1448]:
May 16 01:43:50.704440 update_engine[1448]:
May 16 01:43:50.704440 update_engine[1448]:
May 16 01:43:50.704440 update_engine[1448]:
May 16 01:43:50.704440 update_engine[1448]:
May 16 01:43:50.704440 update_engine[1448]: I20250516 01:43:50.704131 1448 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
May 16 01:43:50.710055 update_engine[1448]: I20250516 01:43:50.709378 1448 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
May 16 01:43:50.710872 update_engine[1448]: I20250516 01:43:50.710748 1448 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
May 16 01:43:50.711320 locksmithd[1470]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
May 16 01:43:50.718534 update_engine[1448]: E20250516 01:43:50.718415 1448 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
May 16 01:43:50.718818 update_engine[1448]: I20250516 01:43:50.718683 1448 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
May 16 01:44:00.645884 update_engine[1448]: I20250516 01:44:00.645668 1448 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
May 16 01:44:00.646783 update_engine[1448]: I20250516 01:44:00.646134 1448 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
May 16 01:44:00.646783 update_engine[1448]: I20250516 01:44:00.646692 1448 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
May 16 01:44:00.652360 update_engine[1448]: E20250516 01:44:00.652287 1448 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
May 16 01:44:00.652533 update_engine[1448]: I20250516 01:44:00.652398 1448 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
May 16 01:44:10.647776 update_engine[1448]: I20250516 01:44:10.647497 1448 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
May 16 01:44:10.649155 update_engine[1448]: I20250516 01:44:10.648232 1448 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
May 16 01:44:10.649155 update_engine[1448]: I20250516 01:44:10.649004 1448 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
May 16 01:44:10.654506 update_engine[1448]: E20250516 01:44:10.654390 1448 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
May 16 01:44:10.654723 update_engine[1448]: I20250516 01:44:10.654505 1448 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
May 16 01:44:20.645578 update_engine[1448]: I20250516 01:44:20.645349 1448 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
May 16 01:44:20.646634 update_engine[1448]: I20250516 01:44:20.646260 1448 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
May 16 01:44:20.647096 update_engine[1448]: I20250516 01:44:20.646968 1448 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
May 16 01:44:20.652348 update_engine[1448]: E20250516 01:44:20.652194 1448 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
May 16 01:44:20.652726 update_engine[1448]: I20250516 01:44:20.652382 1448 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
May 16 01:44:20.652726 update_engine[1448]: I20250516 01:44:20.652443 1448 omaha_request_action.cc:617] Omaha request response:
May 16 01:44:20.652988 update_engine[1448]: E20250516 01:44:20.652878 1448 omaha_request_action.cc:636] Omaha request network transfer failed.
May 16 01:44:20.653362 update_engine[1448]: I20250516 01:44:20.653239 1448 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
May 16 01:44:20.653362 update_engine[1448]: I20250516 01:44:20.653295 1448 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
May 16 01:44:20.653362 update_engine[1448]: I20250516 01:44:20.653313 1448 update_attempter.cc:306] Processing Done.
May 16 01:44:20.653852 update_engine[1448]: E20250516 01:44:20.653384 1448 update_attempter.cc:619] Update failed.
May 16 01:44:20.653852 update_engine[1448]: I20250516 01:44:20.653422 1448 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
May 16 01:44:20.653852 update_engine[1448]: I20250516 01:44:20.653447 1448 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
May 16 01:44:20.653852 update_engine[1448]: I20250516 01:44:20.653470 1448 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
May 16 01:44:20.653852 update_engine[1448]: I20250516 01:44:20.653709 1448 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
May 16 01:44:20.653852 update_engine[1448]: I20250516 01:44:20.653789 1448 omaha_request_action.cc:271] Posting an Omaha request to disabled
May 16 01:44:20.653852 update_engine[1448]: I20250516 01:44:20.653810 1448 omaha_request_action.cc:272] Request:
May 16 01:44:20.653852 update_engine[1448]:
May 16 01:44:20.653852 update_engine[1448]:
May 16 01:44:20.653852 update_engine[1448]:
May 16 01:44:20.653852 update_engine[1448]:
May 16 01:44:20.653852 update_engine[1448]:
May 16 01:44:20.653852 update_engine[1448]:
May 16 01:44:20.653852 update_engine[1448]: I20250516 01:44:20.653832 1448 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
May 16 01:44:20.655061 update_engine[1448]: I20250516 01:44:20.654248 1448 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
May 16 01:44:20.655061 update_engine[1448]: I20250516 01:44:20.654853 1448 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
May 16 01:44:20.655943 locksmithd[1470]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
May 16 01:44:20.659974 update_engine[1448]: E20250516 01:44:20.659884 1448 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
May 16 01:44:20.660108 update_engine[1448]: I20250516 01:44:20.659994 1448 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
May 16 01:44:20.660108 update_engine[1448]: I20250516 01:44:20.660015 1448 omaha_request_action.cc:617] Omaha request response:
May 16 01:44:20.660108 update_engine[1448]: I20250516 01:44:20.660030 1448 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
May 16 01:44:20.660108 update_engine[1448]: I20250516 01:44:20.660043 1448 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
May 16 01:44:20.660108 update_engine[1448]: I20250516 01:44:20.660053 1448 update_attempter.cc:306] Processing Done.
May 16 01:44:20.660108 update_engine[1448]: I20250516 01:44:20.660066 1448 update_attempter.cc:310] Error event sent.
May 16 01:44:20.660548 update_engine[1448]: I20250516 01:44:20.660104 1448 update_check_scheduler.cc:74] Next update check in 43m34s
May 16 01:44:20.661007 locksmithd[1470]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0