May 15 05:02:17.040104 kernel: Linux version 6.6.89-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed May 14 22:18:55 -00 2025
May 15 05:02:17.040131 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=676605e5288ab6a23835eefe0cbb74879b800df0a2a85ac0781041b13f2d6bba
May 15 05:02:17.040142 kernel: BIOS-provided physical RAM map:
May 15 05:02:17.040150 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
May 15 05:02:17.040158 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
May 15 05:02:17.040169 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
May 15 05:02:17.040177 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdcfff] usable
May 15 05:02:17.040185 kernel: BIOS-e820: [mem 0x00000000bffdd000-0x00000000bfffffff] reserved
May 15 05:02:17.040193 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
May 15 05:02:17.040201 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
May 15 05:02:17.040209 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000013fffffff] usable
May 15 05:02:17.040217 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
May 15 05:02:17.040224 kernel: NX (Execute Disable) protection: active
May 15 05:02:17.040234 kernel: APIC: Static calls initialized
May 15 05:02:17.040244 kernel: SMBIOS 3.0.0 present.
May 15 05:02:17.040252 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.16.3-debian-1.16.3-2 04/01/2014
May 15 05:02:17.040260 kernel: Hypervisor detected: KVM
May 15 05:02:17.040268 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
May 15 05:02:17.040276 kernel: kvm-clock: using sched offset of 3548777029 cycles
May 15 05:02:17.040287 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
May 15 05:02:17.040296 kernel: tsc: Detected 1996.249 MHz processor
May 15 05:02:17.040304 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
May 15 05:02:17.040313 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
May 15 05:02:17.040322 kernel: last_pfn = 0x140000 max_arch_pfn = 0x400000000
May 15 05:02:17.040330 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
May 15 05:02:17.040584 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
May 15 05:02:17.040600 kernel: last_pfn = 0xbffdd max_arch_pfn = 0x400000000
May 15 05:02:17.040609 kernel: ACPI: Early table checksum verification disabled
May 15 05:02:17.040622 kernel: ACPI: RSDP 0x00000000000F51E0 000014 (v00 BOCHS )
May 15 05:02:17.040633 kernel: ACPI: RSDT 0x00000000BFFE1B65 000030 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 15 05:02:17.040642 kernel: ACPI: FACP 0x00000000BFFE1A49 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 15 05:02:17.040652 kernel: ACPI: DSDT 0x00000000BFFE0040 001A09 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 15 05:02:17.040662 kernel: ACPI: FACS 0x00000000BFFE0000 000040
May 15 05:02:17.040671 kernel: ACPI: APIC 0x00000000BFFE1ABD 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 15 05:02:17.040681 kernel: ACPI: WAET 0x00000000BFFE1B3D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 15 05:02:17.040691 kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1a49-0xbffe1abc]
May 15 05:02:17.040702 kernel: ACPI: Reserving DSDT table memory at [mem 0xbffe0040-0xbffe1a48]
May 15 05:02:17.040710 kernel: ACPI: Reserving FACS table memory at [mem 0xbffe0000-0xbffe003f]
May 15 05:02:17.040719 kernel: ACPI: Reserving APIC table memory at [mem 0xbffe1abd-0xbffe1b3c]
May 15 05:02:17.040727 kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1b3d-0xbffe1b64]
May 15 05:02:17.040740 kernel: No NUMA configuration found
May 15 05:02:17.040749 kernel: Faking a node at [mem 0x0000000000000000-0x000000013fffffff]
May 15 05:02:17.040759 kernel: NODE_DATA(0) allocated [mem 0x13fffa000-0x13fffffff]
May 15 05:02:17.040769 kernel: Zone ranges:
May 15 05:02:17.040778 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
May 15 05:02:17.040786 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
May 15 05:02:17.040794 kernel: Normal [mem 0x0000000100000000-0x000000013fffffff]
May 15 05:02:17.040802 kernel: Movable zone start for each node
May 15 05:02:17.040810 kernel: Early memory node ranges
May 15 05:02:17.040818 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
May 15 05:02:17.040826 kernel: node 0: [mem 0x0000000000100000-0x00000000bffdcfff]
May 15 05:02:17.040836 kernel: node 0: [mem 0x0000000100000000-0x000000013fffffff]
May 15 05:02:17.040844 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000013fffffff]
May 15 05:02:17.040853 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 15 05:02:17.040861 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
May 15 05:02:17.040869 kernel: On node 0, zone Normal: 35 pages in unavailable ranges
May 15 05:02:17.040877 kernel: ACPI: PM-Timer IO Port: 0x608
May 15 05:02:17.040885 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
May 15 05:02:17.040893 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
May 15 05:02:17.040902 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
May 15 05:02:17.040912 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
May 15 05:02:17.040920 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
May 15 05:02:17.040928 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
May 15 05:02:17.040936 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
May 15 05:02:17.040944 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
May 15 05:02:17.040952 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
May 15 05:02:17.040961 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
May 15 05:02:17.040969 kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
May 15 05:02:17.040977 kernel: Booting paravirtualized kernel on KVM
May 15 05:02:17.040987 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
May 15 05:02:17.040996 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
May 15 05:02:17.041004 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
May 15 05:02:17.041012 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
May 15 05:02:17.041020 kernel: pcpu-alloc: [0] 0 1
May 15 05:02:17.041028 kernel: kvm-guest: PV spinlocks disabled, no host support
May 15 05:02:17.041038 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=676605e5288ab6a23835eefe0cbb74879b800df0a2a85ac0781041b13f2d6bba
May 15 05:02:17.041047 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 15 05:02:17.041057 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 15 05:02:17.041065 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 15 05:02:17.041073 kernel: Fallback order for Node 0: 0
May 15 05:02:17.041081 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1031901
May 15 05:02:17.041089 kernel: Policy zone: Normal
May 15 05:02:17.041098 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 15 05:02:17.041106 kernel: software IO TLB: area num 2.
May 15 05:02:17.041114 kernel: Memory: 3966216K/4193772K available (12288K kernel code, 2295K rwdata, 22752K rodata, 43000K init, 2192K bss, 227296K reserved, 0K cma-reserved)
May 15 05:02:17.041123 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
May 15 05:02:17.041133 kernel: ftrace: allocating 37946 entries in 149 pages
May 15 05:02:17.041141 kernel: ftrace: allocated 149 pages with 4 groups
May 15 05:02:17.041150 kernel: Dynamic Preempt: voluntary
May 15 05:02:17.041158 kernel: rcu: Preemptible hierarchical RCU implementation.
May 15 05:02:17.041169 kernel: rcu: RCU event tracing is enabled.
May 15 05:02:17.041178 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
May 15 05:02:17.041186 kernel: Trampoline variant of Tasks RCU enabled.
May 15 05:02:17.041194 kernel: Rude variant of Tasks RCU enabled.
May 15 05:02:17.041203 kernel: Tracing variant of Tasks RCU enabled.
May 15 05:02:17.041214 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 15 05:02:17.041222 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
May 15 05:02:17.041230 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
May 15 05:02:17.041238 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 15 05:02:17.041246 kernel: Console: colour VGA+ 80x25
May 15 05:02:17.041255 kernel: printk: console [tty0] enabled
May 15 05:02:17.041263 kernel: printk: console [ttyS0] enabled
May 15 05:02:17.041271 kernel: ACPI: Core revision 20230628
May 15 05:02:17.041279 kernel: APIC: Switch to symmetric I/O mode setup
May 15 05:02:17.041289 kernel: x2apic enabled
May 15 05:02:17.041297 kernel: APIC: Switched APIC routing to: physical x2apic
May 15 05:02:17.041306 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
May 15 05:02:17.041314 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
May 15 05:02:17.041322 kernel: Calibrating delay loop (skipped) preset value.. 3992.49 BogoMIPS (lpj=1996249)
May 15 05:02:17.041330 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
May 15 05:02:17.041356 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
May 15 05:02:17.041366 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
May 15 05:02:17.041374 kernel: Spectre V2 : Mitigation: Retpolines
May 15 05:02:17.041385 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
May 15 05:02:17.041393 kernel: Speculative Store Bypass: Vulnerable
May 15 05:02:17.041401 kernel: x86/fpu: x87 FPU will use FXSAVE
May 15 05:02:17.041409 kernel: Freeing SMP alternatives memory: 32K
May 15 05:02:17.041418 kernel: pid_max: default: 32768 minimum: 301
May 15 05:02:17.041433 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
May 15 05:02:17.041443 kernel: landlock: Up and running.
May 15 05:02:17.041452 kernel: SELinux: Initializing.
May 15 05:02:17.041461 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 15 05:02:17.041469 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 15 05:02:17.041478 kernel: smpboot: CPU0: AMD Intel Core i7 9xx (Nehalem Class Core i7) (family: 0x6, model: 0x1a, stepping: 0x3)
May 15 05:02:17.041487 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 15 05:02:17.041498 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 15 05:02:17.041507 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 15 05:02:17.041515 kernel: Performance Events: AMD PMU driver.
May 15 05:02:17.041524 kernel: ... version: 0
May 15 05:02:17.041533 kernel: ... bit width: 48
May 15 05:02:17.041543 kernel: ... generic registers: 4
May 15 05:02:17.041552 kernel: ... value mask: 0000ffffffffffff
May 15 05:02:17.041561 kernel: ... max period: 00007fffffffffff
May 15 05:02:17.041569 kernel: ... fixed-purpose events: 0
May 15 05:02:17.041578 kernel: ... event mask: 000000000000000f
May 15 05:02:17.041586 kernel: signal: max sigframe size: 1440
May 15 05:02:17.041595 kernel: rcu: Hierarchical SRCU implementation.
May 15 05:02:17.041604 kernel: rcu: Max phase no-delay instances is 400.
May 15 05:02:17.041612 kernel: smp: Bringing up secondary CPUs ...
May 15 05:02:17.041623 kernel: smpboot: x86: Booting SMP configuration:
May 15 05:02:17.041632 kernel: .... node #0, CPUs: #1
May 15 05:02:17.041640 kernel: smp: Brought up 1 node, 2 CPUs
May 15 05:02:17.041649 kernel: smpboot: Max logical packages: 2
May 15 05:02:17.041658 kernel: smpboot: Total of 2 processors activated (7984.99 BogoMIPS)
May 15 05:02:17.041666 kernel: devtmpfs: initialized
May 15 05:02:17.041675 kernel: x86/mm: Memory block size: 128MB
May 15 05:02:17.041684 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 15 05:02:17.041692 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
May 15 05:02:17.041703 kernel: pinctrl core: initialized pinctrl subsystem
May 15 05:02:17.041712 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 15 05:02:17.041720 kernel: audit: initializing netlink subsys (disabled)
May 15 05:02:17.041729 kernel: audit: type=2000 audit(1747285336.166:1): state=initialized audit_enabled=0 res=1
May 15 05:02:17.041738 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 15 05:02:17.041747 kernel: thermal_sys: Registered thermal governor 'user_space'
May 15 05:02:17.041755 kernel: cpuidle: using governor menu
May 15 05:02:17.041764 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 15 05:02:17.041772 kernel: dca service started, version 1.12.1
May 15 05:02:17.041784 kernel: PCI: Using configuration type 1 for base access
May 15 05:02:17.041793 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
May 15 05:02:17.041801 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 15 05:02:17.041810 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
May 15 05:02:17.041819 kernel: ACPI: Added _OSI(Module Device)
May 15 05:02:17.041827 kernel: ACPI: Added _OSI(Processor Device)
May 15 05:02:17.041836 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 15 05:02:17.041844 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 15 05:02:17.041853 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 15 05:02:17.041864 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
May 15 05:02:17.041872 kernel: ACPI: Interpreter enabled
May 15 05:02:17.041881 kernel: ACPI: PM: (supports S0 S3 S5)
May 15 05:02:17.041889 kernel: ACPI: Using IOAPIC for interrupt routing
May 15 05:02:17.041898 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
May 15 05:02:17.041907 kernel: PCI: Using E820 reservations for host bridge windows
May 15 05:02:17.041915 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
May 15 05:02:17.041924 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 15 05:02:17.042064 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
May 15 05:02:17.042165 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
May 15 05:02:17.044859 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
May 15 05:02:17.044878 kernel: acpiphp: Slot [3] registered
May 15 05:02:17.044889 kernel: acpiphp: Slot [4] registered
May 15 05:02:17.044898 kernel: acpiphp: Slot [5] registered
May 15 05:02:17.044907 kernel: acpiphp: Slot [6] registered
May 15 05:02:17.044916 kernel: acpiphp: Slot [7] registered
May 15 05:02:17.044924 kernel: acpiphp: Slot [8] registered
May 15 05:02:17.044937 kernel: acpiphp: Slot [9] registered
May 15 05:02:17.044946 kernel: acpiphp: Slot [10] registered
May 15 05:02:17.044954 kernel: acpiphp: Slot [11] registered
May 15 05:02:17.044963 kernel: acpiphp: Slot [12] registered
May 15 05:02:17.044971 kernel: acpiphp: Slot [13] registered
May 15 05:02:17.044980 kernel: acpiphp: Slot [14] registered
May 15 05:02:17.044988 kernel: acpiphp: Slot [15] registered
May 15 05:02:17.044997 kernel: acpiphp: Slot [16] registered
May 15 05:02:17.045005 kernel: acpiphp: Slot [17] registered
May 15 05:02:17.045016 kernel: acpiphp: Slot [18] registered
May 15 05:02:17.045025 kernel: acpiphp: Slot [19] registered
May 15 05:02:17.045033 kernel: acpiphp: Slot [20] registered
May 15 05:02:17.045042 kernel: acpiphp: Slot [21] registered
May 15 05:02:17.045050 kernel: acpiphp: Slot [22] registered
May 15 05:02:17.045058 kernel: acpiphp: Slot [23] registered
May 15 05:02:17.045067 kernel: acpiphp: Slot [24] registered
May 15 05:02:17.045075 kernel: acpiphp: Slot [25] registered
May 15 05:02:17.045084 kernel: acpiphp: Slot [26] registered
May 15 05:02:17.045094 kernel: acpiphp: Slot [27] registered
May 15 05:02:17.045103 kernel: acpiphp: Slot [28] registered
May 15 05:02:17.045111 kernel: acpiphp: Slot [29] registered
May 15 05:02:17.045119 kernel: acpiphp: Slot [30] registered
May 15 05:02:17.045128 kernel: acpiphp: Slot [31] registered
May 15 05:02:17.045136 kernel: PCI host bridge to bus 0000:00
May 15 05:02:17.045236 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
May 15 05:02:17.045324 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
May 15 05:02:17.045431 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
May 15 05:02:17.045516 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
May 15 05:02:17.045594 kernel: pci_bus 0000:00: root bus resource [mem 0xc000000000-0xc07fffffff window]
May 15 05:02:17.045674 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 15 05:02:17.045787 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
May 15 05:02:17.045889 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
May 15 05:02:17.045995 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
May 15 05:02:17.046092 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc120-0xc12f]
May 15 05:02:17.046183 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
May 15 05:02:17.046274 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
May 15 05:02:17.048039 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
May 15 05:02:17.048147 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
May 15 05:02:17.048259 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
May 15 05:02:17.048389 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
May 15 05:02:17.048503 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
May 15 05:02:17.048611 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
May 15 05:02:17.048712 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
May 15 05:02:17.048814 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xc000000000-0xc000003fff 64bit pref]
May 15 05:02:17.048907 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfeb90000-0xfeb90fff]
May 15 05:02:17.048998 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfeb80000-0xfeb8ffff pref]
May 15 05:02:17.049098 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
May 15 05:02:17.049200 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
May 15 05:02:17.049294 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc080-0xc0bf]
May 15 05:02:17.049405 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfeb91000-0xfeb91fff]
May 15 05:02:17.049497 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xc000004000-0xc000007fff 64bit pref]
May 15 05:02:17.049590 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb00000-0xfeb7ffff pref]
May 15 05:02:17.049692 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
May 15 05:02:17.049791 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
May 15 05:02:17.049885 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfeb92000-0xfeb92fff]
May 15 05:02:17.049980 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xc000008000-0xc00000bfff 64bit pref]
May 15 05:02:17.050081 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00
May 15 05:02:17.050175 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc0c0-0xc0ff]
May 15 05:02:17.050268 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xc00000c000-0xc00000ffff 64bit pref]
May 15 05:02:17.056452 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00
May 15 05:02:17.056575 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc100-0xc11f]
May 15 05:02:17.056675 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfeb93000-0xfeb93fff]
May 15 05:02:17.056774 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xc000010000-0xc000013fff 64bit pref]
May 15 05:02:17.056789 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
May 15 05:02:17.056799 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
May 15 05:02:17.056808 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
May 15 05:02:17.056817 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
May 15 05:02:17.056826 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
May 15 05:02:17.056838 kernel: iommu: Default domain type: Translated
May 15 05:02:17.056847 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
May 15 05:02:17.056856 kernel: PCI: Using ACPI for IRQ routing
May 15 05:02:17.056865 kernel: PCI: pci_cache_line_size set to 64 bytes
May 15 05:02:17.056873 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
May 15 05:02:17.056882 kernel: e820: reserve RAM buffer [mem 0xbffdd000-0xbfffffff]
May 15 05:02:17.056972 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
May 15 05:02:17.057065 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
May 15 05:02:17.057156 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
May 15 05:02:17.057173 kernel: vgaarb: loaded
May 15 05:02:17.057182 kernel: clocksource: Switched to clocksource kvm-clock
May 15 05:02:17.057191 kernel: VFS: Disk quotas dquot_6.6.0
May 15 05:02:17.057200 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 15 05:02:17.057209 kernel: pnp: PnP ACPI init
May 15 05:02:17.057311 kernel: pnp 00:03: [dma 2]
May 15 05:02:17.057327 kernel: pnp: PnP ACPI: found 5 devices
May 15 05:02:17.057352 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
May 15 05:02:17.057367 kernel: NET: Registered PF_INET protocol family
May 15 05:02:17.057378 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 15 05:02:17.057393 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 15 05:02:17.057404 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 15 05:02:17.057415 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 15 05:02:17.057425 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 15 05:02:17.057436 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 15 05:02:17.057446 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 15 05:02:17.057456 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 15 05:02:17.057469 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 15 05:02:17.057479 kernel: NET: Registered PF_XDP protocol family
May 15 05:02:17.057569 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
May 15 05:02:17.057650 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
May 15 05:02:17.057729 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
May 15 05:02:17.057808 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
May 15 05:02:17.057887 kernel: pci_bus 0000:00: resource 8 [mem 0xc000000000-0xc07fffffff window]
May 15 05:02:17.057982 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
May 15 05:02:17.058082 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
May 15 05:02:17.058097 kernel: PCI: CLS 0 bytes, default 64
May 15 05:02:17.058106 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
May 15 05:02:17.058115 kernel: software IO TLB: mapped [mem 0x00000000bbfdd000-0x00000000bffdd000] (64MB)
May 15 05:02:17.058124 kernel: Initialise system trusted keyrings
May 15 05:02:17.058133 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 15 05:02:17.058142 kernel: Key type asymmetric registered
May 15 05:02:17.058151 kernel: Asymmetric key parser 'x509' registered
May 15 05:02:17.058159 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
May 15 05:02:17.058172 kernel: io scheduler mq-deadline registered
May 15 05:02:17.058180 kernel: io scheduler kyber registered
May 15 05:02:17.058189 kernel: io scheduler bfq registered
May 15 05:02:17.058198 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
May 15 05:02:17.058208 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
May 15 05:02:17.058217 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
May 15 05:02:17.058225 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
May 15 05:02:17.058234 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
May 15 05:02:17.058243 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 15 05:02:17.058255 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
May 15 05:02:17.058264 kernel: random: crng init done
May 15 05:02:17.058273 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
May 15 05:02:17.058282 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
May 15 05:02:17.058290 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
May 15 05:02:17.060011 kernel: rtc_cmos 00:04: RTC can wake from S4
May 15 05:02:17.060030 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
May 15 05:02:17.060116 kernel: rtc_cmos 00:04: registered as rtc0
May 15 05:02:17.060209 kernel: rtc_cmos 00:04: setting system clock to 2025-05-15T05:02:16 UTC (1747285336)
May 15 05:02:17.060295 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
May 15 05:02:17.060310 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
May 15 05:02:17.060319 kernel: NET: Registered PF_INET6 protocol family
May 15 05:02:17.060328 kernel: Segment Routing with IPv6
May 15 05:02:17.062549 kernel: In-situ OAM (IOAM) with IPv6
May 15 05:02:17.062565 kernel: NET: Registered PF_PACKET protocol family
May 15 05:02:17.062574 kernel: Key type dns_resolver registered
May 15 05:02:17.062583 kernel: IPI shorthand broadcast: enabled
May 15 05:02:17.062596 kernel: sched_clock: Marking stable (1035006902, 178432303)->(1245423331, -31984126)
May 15 05:02:17.062605 kernel: registered taskstats version 1
May 15 05:02:17.062614 kernel: Loading compiled-in X.509 certificates
May 15 05:02:17.062623 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.89-flatcar: 24318a9a7bb74dcc18d1d3d4ac63358025b8c253'
May 15 05:02:17.062632 kernel: Key type .fscrypt registered
May 15 05:02:17.062641 kernel: Key type fscrypt-provisioning registered
May 15 05:02:17.062650 kernel: ima: No TPM chip found, activating TPM-bypass!
May 15 05:02:17.062659 kernel: ima: Allocated hash algorithm: sha1
May 15 05:02:17.062670 kernel: ima: No architecture policies found
May 15 05:02:17.062678 kernel: clk: Disabling unused clocks
May 15 05:02:17.062688 kernel: Freeing unused kernel image (initmem) memory: 43000K
May 15 05:02:17.062696 kernel: Write protecting the kernel read-only data: 36864k
May 15 05:02:17.062705 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
May 15 05:02:17.062714 kernel: Run /init as init process
May 15 05:02:17.062723 kernel: with arguments:
May 15 05:02:17.062731 kernel: /init
May 15 05:02:17.062740 kernel: with environment:
May 15 05:02:17.062749 kernel: HOME=/
May 15 05:02:17.062759 kernel: TERM=linux
May 15 05:02:17.062768 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 15 05:02:17.062780 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
May 15 05:02:17.062793 systemd[1]: Detected virtualization kvm.
May 15 05:02:17.062803 systemd[1]: Detected architecture x86-64.
May 15 05:02:17.062812 systemd[1]: Running in initrd.
May 15 05:02:17.062821 systemd[1]: No hostname configured, using default hostname.
May 15 05:02:17.062833 systemd[1]: Hostname set to .
May 15 05:02:17.062843 systemd[1]: Initializing machine ID from VM UUID.
May 15 05:02:17.062852 systemd[1]: Queued start job for default target initrd.target.
May 15 05:02:17.062862 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 15 05:02:17.062871 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 15 05:02:17.062881 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 15 05:02:17.062891 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 15 05:02:17.062912 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 15 05:02:17.062925 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 15 05:02:17.062936 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 15 05:02:17.062946 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 15 05:02:17.062956 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 15 05:02:17.062968 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 15 05:02:17.062978 systemd[1]: Reached target paths.target - Path Units.
May 15 05:02:17.062988 systemd[1]: Reached target slices.target - Slice Units.
May 15 05:02:17.062997 systemd[1]: Reached target swap.target - Swaps.
May 15 05:02:17.063007 systemd[1]: Reached target timers.target - Timer Units.
May 15 05:02:17.063017 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 15 05:02:17.063027 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 15 05:02:17.063036 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 15 05:02:17.063046 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
May 15 05:02:17.063058 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 15 05:02:17.063068 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 15 05:02:17.063078 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 15 05:02:17.063087 systemd[1]: Reached target sockets.target - Socket Units.
May 15 05:02:17.063097 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 15 05:02:17.063107 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 15 05:02:17.063117 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 15 05:02:17.063127 systemd[1]: Starting systemd-fsck-usr.service...
May 15 05:02:17.063139 systemd[1]: Starting systemd-journald.service - Journal Service...
May 15 05:02:17.063149 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 15 05:02:17.063181 systemd-journald[185]: Collecting audit messages is disabled.
May 15 05:02:17.063205 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 15 05:02:17.063219 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 15 05:02:17.063229 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 15 05:02:17.063238 systemd[1]: Finished systemd-fsck-usr.service.
May 15 05:02:17.063249 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 15 05:02:17.063261 systemd-journald[185]: Journal started
May 15 05:02:17.063285 systemd-journald[185]: Runtime Journal (/run/log/journal/f1ac09b8d995455883619f2a6726a479) is 8.0M, max 78.3M, 70.3M free.
May 15 05:02:17.065896 systemd-modules-load[186]: Inserted module 'overlay'
May 15 05:02:17.104587 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 15 05:02:17.104614 kernel: Bridge firewalling registered
May 15 05:02:17.104628 systemd[1]: Started systemd-journald.service - Journal Service.
May 15 05:02:17.100876 systemd-modules-load[186]: Inserted module 'br_netfilter'
May 15 05:02:17.107669 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 15 05:02:17.108355 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 15 05:02:17.114539 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 15 05:02:17.116228 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 15 05:02:17.124936 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 15 05:02:17.126600 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 15 05:02:17.136692 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 15 05:02:17.139935 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 15 05:02:17.141947 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 15 05:02:17.146518 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 15 05:02:17.147208 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 15 05:02:17.148022 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 15 05:02:17.155529 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 15 05:02:17.169688 dracut-cmdline[220]: dracut-dracut-053
May 15 05:02:17.176226 dracut-cmdline[220]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=676605e5288ab6a23835eefe0cbb74879b800df0a2a85ac0781041b13f2d6bba
May 15 05:02:17.184025 systemd-resolved[218]: Positive Trust Anchors:
May 15 05:02:17.184044 systemd-resolved[218]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 15 05:02:17.184085 systemd-resolved[218]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 15 05:02:17.188510 systemd-resolved[218]: Defaulting to hostname 'linux'.
May 15 05:02:17.189413 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 15 05:02:17.190682 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 15 05:02:17.259412 kernel: SCSI subsystem initialized
May 15 05:02:17.271405 kernel: Loading iSCSI transport class v2.0-870.
May 15 05:02:17.283404 kernel: iscsi: registered transport (tcp)
May 15 05:02:17.306671 kernel: iscsi: registered transport (qla4xxx)
May 15 05:02:17.306738 kernel: QLogic iSCSI HBA Driver
May 15 05:02:17.368409 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 15 05:02:17.377604 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 15 05:02:17.432062 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 15 05:02:17.432177 kernel: device-mapper: uevent: version 1.0.3
May 15 05:02:17.433737 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
May 15 05:02:17.497473 kernel: raid6: sse2x4 gen() 5159 MB/s
May 15 05:02:17.516474 kernel: raid6: sse2x2 gen() 5778 MB/s
May 15 05:02:17.534848 kernel: raid6: sse2x1 gen() 10113 MB/s
May 15 05:02:17.534919 kernel: raid6: using algorithm sse2x1 gen() 10113 MB/s
May 15 05:02:17.553795 kernel: raid6: .... xor() 7311 MB/s, rmw enabled
May 15 05:02:17.553854 kernel: raid6: using ssse3x2 recovery algorithm
May 15 05:02:17.576851 kernel: xor: measuring software checksum speed
May 15 05:02:17.576897 kernel: prefetch64-sse : 18527 MB/sec
May 15 05:02:17.578145 kernel: generic_sse : 16812 MB/sec
May 15 05:02:17.578190 kernel: xor: using function: prefetch64-sse (18527 MB/sec)
May 15 05:02:17.758828 kernel: Btrfs loaded, zoned=no, fsverity=no
May 15 05:02:17.777541 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
May 15 05:02:17.786607 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 15 05:02:17.801419 systemd-udevd[404]: Using default interface naming scheme 'v255'.
May 15 05:02:17.805799 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 15 05:02:17.814605 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 15 05:02:17.835606 dracut-pre-trigger[414]: rd.md=0: removing MD RAID activation
May 15 05:02:17.876160 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 15 05:02:17.884632 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 15 05:02:17.932078 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 15 05:02:17.942710 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
May 15 05:02:17.989928 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
May 15 05:02:17.992399 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
May 15 05:02:17.993251 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 15 05:02:17.993807 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 15 05:02:17.998553 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
May 15 05:02:18.011894 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
May 15 05:02:18.027544 kernel: virtio_blk virtio2: 2/0/0 default/read/poll queues
May 15 05:02:18.037423 kernel: virtio_blk virtio2: [vda] 20971520 512-byte logical blocks (10.7 GB/10.0 GiB)
May 15 05:02:18.040417 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 15 05:02:18.040549 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 15 05:02:18.042679 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 15 05:02:18.043848 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 15 05:02:18.044218 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 15 05:02:18.056883 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 15 05:02:18.056903 kernel: GPT:17805311 != 20971519
May 15 05:02:18.056915 kernel: GPT:Alternate GPT header not at the end of the disk.
May 15 05:02:18.056926 kernel: GPT:17805311 != 20971519
May 15 05:02:18.056937 kernel: GPT: Use GNU Parted to correct GPT errors.
May 15 05:02:18.056948 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 15 05:02:18.046763 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 15 05:02:18.059368 kernel: libata version 3.00 loaded.
May 15 05:02:18.061385 kernel: ata_piix 0000:00:01.1: version 2.13
May 15 05:02:18.061715 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 15 05:02:18.069509 kernel: scsi host0: ata_piix
May 15 05:02:18.072565 kernel: scsi host1: ata_piix
May 15 05:02:18.072723 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc120 irq 14
May 15 05:02:18.074566 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc128 irq 15
May 15 05:02:18.102393 kernel: BTRFS: device fsid 588f8840-d63c-4068-b03d-1642b4e6460f devid 1 transid 46 /dev/vda3 scanned by (udev-worker) (453)
May 15 05:02:18.107295 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
May 15 05:02:18.147674 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (463)
May 15 05:02:18.148532 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 15 05:02:18.167243 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
May 15 05:02:18.171900 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
May 15 05:02:18.172589 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
May 15 05:02:18.179868 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 15 05:02:18.183636 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
May 15 05:02:18.188483 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 15 05:02:18.211923 disk-uuid[502]: Primary Header is updated.
May 15 05:02:18.211923 disk-uuid[502]: Secondary Entries is updated.
May 15 05:02:18.211923 disk-uuid[502]: Secondary Header is updated.
May 15 05:02:18.226817 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 15 05:02:18.253894 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 15 05:02:19.239433 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 15 05:02:19.239586 disk-uuid[503]: The operation has completed successfully.
May 15 05:02:19.313015 systemd[1]: disk-uuid.service: Deactivated successfully.
May 15 05:02:19.313228 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
May 15 05:02:19.331632 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
May 15 05:02:19.338535 sh[523]: Success
May 15 05:02:19.369390 kernel: device-mapper: verity: sha256 using implementation "sha256-ssse3"
May 15 05:02:19.437915 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
May 15 05:02:19.451547 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
May 15 05:02:19.453089 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
May 15 05:02:19.489416 kernel: BTRFS info (device dm-0): first mount of filesystem 588f8840-d63c-4068-b03d-1642b4e6460f
May 15 05:02:19.489497 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
May 15 05:02:19.489532 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
May 15 05:02:19.491148 kernel: BTRFS info (device dm-0): disabling log replay at mount time
May 15 05:02:19.494489 kernel: BTRFS info (device dm-0): using free space tree
May 15 05:02:19.511930 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
May 15 05:02:19.513232 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
May 15 05:02:19.520512 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
May 15 05:02:19.525280 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
May 15 05:02:19.548270 kernel: BTRFS info (device vda6): first mount of filesystem 850231c6-8b0d-4143-afe9-f74782b94c61
May 15 05:02:19.548332 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 15 05:02:19.552607 kernel: BTRFS info (device vda6): using free space tree
May 15 05:02:19.566447 kernel: BTRFS info (device vda6): auto enabling async discard
May 15 05:02:19.580321 systemd[1]: mnt-oem.mount: Deactivated successfully.
May 15 05:02:19.583228 kernel: BTRFS info (device vda6): last unmount of filesystem 850231c6-8b0d-4143-afe9-f74782b94c61
May 15 05:02:19.599924 systemd[1]: Finished ignition-setup.service - Ignition (setup).
May 15 05:02:19.610725 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
May 15 05:02:19.660696 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 15 05:02:19.671590 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 15 05:02:19.693906 systemd-networkd[706]: lo: Link UP
May 15 05:02:19.693919 systemd-networkd[706]: lo: Gained carrier
May 15 05:02:19.695218 systemd-networkd[706]: Enumeration completed
May 15 05:02:19.696069 systemd-networkd[706]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 15 05:02:19.696073 systemd-networkd[706]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 15 05:02:19.700048 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 15 05:02:19.700791 systemd[1]: Reached target network.target - Network.
May 15 05:02:19.701143 systemd-networkd[706]: eth0: Link UP
May 15 05:02:19.701150 systemd-networkd[706]: eth0: Gained carrier
May 15 05:02:19.701163 systemd-networkd[706]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 15 05:02:19.723456 systemd-networkd[706]: eth0: DHCPv4 address 172.24.4.5/24, gateway 172.24.4.1 acquired from 172.24.4.1
May 15 05:02:19.776460 ignition[640]: Ignition 2.20.0
May 15 05:02:19.776472 ignition[640]: Stage: fetch-offline
May 15 05:02:19.778664 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
May 15 05:02:19.776508 ignition[640]: no configs at "/usr/lib/ignition/base.d"
May 15 05:02:19.776518 ignition[640]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
May 15 05:02:19.776607 ignition[640]: parsed url from cmdline: ""
May 15 05:02:19.776611 ignition[640]: no config URL provided
May 15 05:02:19.776616 ignition[640]: reading system config file "/usr/lib/ignition/user.ign"
May 15 05:02:19.776624 ignition[640]: no config at "/usr/lib/ignition/user.ign"
May 15 05:02:19.776629 ignition[640]: failed to fetch config: resource requires networking
May 15 05:02:19.776819 ignition[640]: Ignition finished successfully
May 15 05:02:19.799795 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
May 15 05:02:19.813476 ignition[717]: Ignition 2.20.0
May 15 05:02:19.813490 ignition[717]: Stage: fetch
May 15 05:02:19.813697 ignition[717]: no configs at "/usr/lib/ignition/base.d"
May 15 05:02:19.813709 ignition[717]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
May 15 05:02:19.813811 ignition[717]: parsed url from cmdline: ""
May 15 05:02:19.813815 ignition[717]: no config URL provided
May 15 05:02:19.813821 ignition[717]: reading system config file "/usr/lib/ignition/user.ign"
May 15 05:02:19.813830 ignition[717]: no config at "/usr/lib/ignition/user.ign"
May 15 05:02:19.813930 ignition[717]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1
May 15 05:02:19.814020 ignition[717]: config drive ("/dev/disk/by-label/config-2") not found. Waiting...
May 15 05:02:19.814049 ignition[717]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting...
May 15 05:02:20.069262 ignition[717]: GET result: OK
May 15 05:02:20.069534 ignition[717]: parsing config with SHA512: 7792152724da1e3d76536b2595af426a20768ad83427627da7bf84fc4e7979a4586ee9673f2c620667accb1f562fc2bff5422db74f0a21792684296cd45c638b
May 15 05:02:20.081787 unknown[717]: fetched base config from "system"
May 15 05:02:20.081817 unknown[717]: fetched base config from "system"
May 15 05:02:20.082850 ignition[717]: fetch: fetch complete
May 15 05:02:20.081832 unknown[717]: fetched user config from "openstack"
May 15 05:02:20.082863 ignition[717]: fetch: fetch passed
May 15 05:02:20.086862 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
May 15 05:02:20.082950 ignition[717]: Ignition finished successfully
May 15 05:02:20.096714 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
May 15 05:02:20.140027 ignition[724]: Ignition 2.20.0
May 15 05:02:20.140058 ignition[724]: Stage: kargs
May 15 05:02:20.140570 ignition[724]: no configs at "/usr/lib/ignition/base.d"
May 15 05:02:20.140603 ignition[724]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
May 15 05:02:20.143192 ignition[724]: kargs: kargs passed
May 15 05:02:20.145853 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
May 15 05:02:20.143301 ignition[724]: Ignition finished successfully
May 15 05:02:20.156788 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
May 15 05:02:20.176737 ignition[730]: Ignition 2.20.0
May 15 05:02:20.176750 ignition[730]: Stage: disks
May 15 05:02:20.176977 ignition[730]: no configs at "/usr/lib/ignition/base.d"
May 15 05:02:20.179131 systemd[1]: Finished ignition-disks.service - Ignition (disks).
May 15 05:02:20.177000 ignition[730]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
May 15 05:02:20.180727 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
May 15 05:02:20.178000 ignition[730]: disks: disks passed
May 15 05:02:20.182035 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 15 05:02:20.178051 ignition[730]: Ignition finished successfully
May 15 05:02:20.183945 systemd[1]: Reached target local-fs.target - Local File Systems.
May 15 05:02:20.186149 systemd[1]: Reached target sysinit.target - System Initialization.
May 15 05:02:20.189117 systemd[1]: Reached target basic.target - Basic System.
May 15 05:02:20.197587 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
May 15 05:02:20.223699 systemd-fsck[738]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
May 15 05:02:20.235877 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
May 15 05:02:20.242712 systemd[1]: Mounting sysroot.mount - /sysroot...
May 15 05:02:20.375387 kernel: EXT4-fs (vda9): mounted filesystem f97506c4-898a-43e3-9925-b47c40fa47d6 r/w with ordered data mode. Quota mode: none.
May 15 05:02:20.376242 systemd[1]: Mounted sysroot.mount - /sysroot.
May 15 05:02:20.377263 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
May 15 05:02:20.385545 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 15 05:02:20.389483 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
May 15 05:02:20.390299 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
May 15 05:02:20.394808 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent...
May 15 05:02:20.416210 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (746)
May 15 05:02:20.416253 kernel: BTRFS info (device vda6): first mount of filesystem 850231c6-8b0d-4143-afe9-f74782b94c61
May 15 05:02:20.416274 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 15 05:02:20.416293 kernel: BTRFS info (device vda6): using free space tree
May 15 05:02:20.396428 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 15 05:02:20.396459 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
May 15 05:02:20.420101 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
May 15 05:02:20.429400 kernel: BTRFS info (device vda6): auto enabling async discard
May 15 05:02:20.429643 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
May 15 05:02:20.446546 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 15 05:02:20.553154 initrd-setup-root[775]: cut: /sysroot/etc/passwd: No such file or directory
May 15 05:02:20.563784 initrd-setup-root[782]: cut: /sysroot/etc/group: No such file or directory
May 15 05:02:20.571829 initrd-setup-root[789]: cut: /sysroot/etc/shadow: No such file or directory
May 15 05:02:20.577155 initrd-setup-root[796]: cut: /sysroot/etc/gshadow: No such file or directory
May 15 05:02:20.662779 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
May 15 05:02:20.667427 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
May 15 05:02:20.671097 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
May 15 05:02:20.678542 systemd[1]: sysroot-oem.mount: Deactivated successfully.
May 15 05:02:20.679733 kernel: BTRFS info (device vda6): last unmount of filesystem 850231c6-8b0d-4143-afe9-f74782b94c61
May 15 05:02:20.703013 ignition[863]: INFO : Ignition 2.20.0
May 15 05:02:20.703013 ignition[863]: INFO : Stage: mount
May 15 05:02:20.703013 ignition[863]: INFO : no configs at "/usr/lib/ignition/base.d"
May 15 05:02:20.703013 ignition[863]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
May 15 05:02:20.706081 ignition[863]: INFO : mount: mount passed
May 15 05:02:20.706081 ignition[863]: INFO : Ignition finished successfully
May 15 05:02:20.705632 systemd[1]: Finished ignition-mount.service - Ignition (mount).
May 15 05:02:20.711244 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
May 15 05:02:20.904669 systemd-networkd[706]: eth0: Gained IPv6LL
May 15 05:02:27.641850 coreos-metadata[748]: May 15 05:02:27.641 WARN failed to locate config-drive, using the metadata service API instead
May 15 05:02:27.681651 coreos-metadata[748]: May 15 05:02:27.681 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
May 15 05:02:27.703326 coreos-metadata[748]: May 15 05:02:27.703 INFO Fetch successful
May 15 05:02:27.704890 coreos-metadata[748]: May 15 05:02:27.704 INFO wrote hostname ci-4152-2-3-n-5005c4e40f.novalocal to /sysroot/etc/hostname
May 15 05:02:27.708117 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully.
May 15 05:02:27.708436 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent.
May 15 05:02:27.720544 systemd[1]: Starting ignition-files.service - Ignition (files)...
May 15 05:02:27.746695 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 15 05:02:27.775424 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (881)
May 15 05:02:27.782902 kernel: BTRFS info (device vda6): first mount of filesystem 850231c6-8b0d-4143-afe9-f74782b94c61
May 15 05:02:27.782973 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 15 05:02:27.787260 kernel: BTRFS info (device vda6): using free space tree
May 15 05:02:27.798424 kernel: BTRFS info (device vda6): auto enabling async discard
May 15 05:02:27.804648 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 15 05:02:27.847587 ignition[899]: INFO : Ignition 2.20.0
May 15 05:02:27.847587 ignition[899]: INFO : Stage: files
May 15 05:02:27.850598 ignition[899]: INFO : no configs at "/usr/lib/ignition/base.d"
May 15 05:02:27.850598 ignition[899]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
May 15 05:02:27.850598 ignition[899]: DEBUG : files: compiled without relabeling support, skipping
May 15 05:02:27.856303 ignition[899]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 15 05:02:27.856303 ignition[899]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 15 05:02:27.860467 ignition[899]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 15 05:02:27.860467 ignition[899]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 15 05:02:27.860467 ignition[899]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 15 05:02:27.858659 unknown[899]: wrote ssh authorized keys file for user: core
May 15 05:02:27.867970 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
May 15 05:02:27.867970 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
May 15 05:02:27.957140 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
May 15 05:02:28.294045 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
May 15 05:02:28.294045 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 15 05:02:28.294045 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
May 15 05:02:29.052826 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
May 15 05:02:29.621846 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 15 05:02:29.621846 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
May 15 05:02:29.626055 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
May 15 05:02:29.626055 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
May 15 05:02:29.626055 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 15 05:02:29.626055 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 15 05:02:29.626055 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 15 05:02:29.626055 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 15 05:02:29.626055 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 15 05:02:29.626055 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 15 05:02:29.626055 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 15 05:02:29.626055 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
May 15 05:02:29.626055 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
May 15 05:02:29.626055 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
May 15 05:02:29.626055 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1
May 15 05:02:30.106328 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
May 15 05:02:31.567855 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
May 15 05:02:31.567855 ignition[899]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
May 15 05:02:31.572913 ignition[899]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 15 05:02:31.572913 ignition[899]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 15 05:02:31.572913 ignition[899]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
May 15 05:02:31.572913 ignition[899]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
May 15 05:02:31.572913 ignition[899]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
May 15 05:02:31.572913 ignition[899]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
May 15 05:02:31.572913 ignition[899]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 15 05:02:31.572913 ignition[899]: INFO : files: files passed
May 15 05:02:31.572913 ignition[899]: INFO : Ignition finished successfully
May 15 05:02:31.572640 systemd[1]: Finished ignition-files.service - Ignition (files).
May 15 05:02:31.590455 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
May 15 05:02:31.592702 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
May 15 05:02:31.595228 systemd[1]: ignition-quench.service: Deactivated successfully.
May 15 05:02:31.595363 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
May 15 05:02:31.621299 initrd-setup-root-after-ignition[928]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 15 05:02:31.622395 initrd-setup-root-after-ignition[928]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
May 15 05:02:31.623167 initrd-setup-root-after-ignition[932]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 15 05:02:31.625707 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
May 15 05:02:31.626473 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
May 15 05:02:31.633522 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
May 15 05:02:31.683090 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 15 05:02:31.683194 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
May 15 05:02:31.685321 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
May 15 05:02:31.687125 systemd[1]: Reached target initrd.target - Initrd Default Target.
May 15 05:02:31.689197 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
May 15 05:02:31.705464 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
May 15 05:02:31.718818 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 15 05:02:31.724605 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
May 15 05:02:31.733658 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
May 15 05:02:31.734397 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
May 15 05:02:31.736550 systemd[1]: Stopped target timers.target - Timer Units.
May 15 05:02:31.738512 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 15 05:02:31.738660 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 15 05:02:31.740828 systemd[1]: Stopped target initrd.target - Initrd Default Target.
May 15 05:02:31.741875 systemd[1]: Stopped target basic.target - Basic System.
May 15 05:02:31.743957 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
May 15 05:02:31.745622 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
May 15 05:02:31.747218 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
May 15 05:02:31.749277 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
May 15 05:02:31.751272 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
May 15 05:02:31.753371 systemd[1]: Stopped target sysinit.target - System Initialization.
May 15 05:02:31.755320 systemd[1]: Stopped target local-fs.target - Local File Systems.
May 15 05:02:31.757362 systemd[1]: Stopped target swap.target - Swaps.
May 15 05:02:31.759243 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 15 05:02:31.759377 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
May 15 05:02:31.761527 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
May 15 05:02:31.762541 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 15 05:02:31.764161 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
May 15 05:02:31.764259 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 15 05:02:31.766198 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 15 05:02:31.766353 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
May 15 05:02:31.769172 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 15 05:02:31.769291 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
May 15 05:02:31.770247 systemd[1]: ignition-files.service: Deactivated successfully.
May 15 05:02:31.770371 systemd[1]: Stopped ignition-files.service - Ignition (files).
May 15 05:02:31.781820 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
May 15 05:02:31.782409 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 15 05:02:31.782543 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
May 15 05:02:31.786920 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
May 15 05:02:31.788196 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 15 05:02:31.789143 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
May 15 05:02:31.791202 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 15 05:02:31.791395 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
May 15 05:02:31.797583 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 15 05:02:31.798385 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
May 15 05:02:31.805710 ignition[953]: INFO : Ignition 2.20.0
May 15 05:02:31.807412 ignition[953]: INFO : Stage: umount
May 15 05:02:31.807412 ignition[953]: INFO : no configs at "/usr/lib/ignition/base.d"
May 15 05:02:31.807412 ignition[953]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
May 15 05:02:31.807412 ignition[953]: INFO : umount: umount passed
May 15 05:02:31.807412 ignition[953]: INFO : Ignition finished successfully
May 15 05:02:31.808533 systemd[1]: ignition-mount.service: Deactivated successfully.
May 15 05:02:31.809385 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
May 15 05:02:31.810532 systemd[1]: ignition-disks.service: Deactivated successfully.
May 15 05:02:31.810576 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
May 15 05:02:31.811733 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 15 05:02:31.811773 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
May 15 05:02:31.813683 systemd[1]: ignition-fetch.service: Deactivated successfully.
May 15 05:02:31.813721 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
May 15 05:02:31.814688 systemd[1]: Stopped target network.target - Network.
May 15 05:02:31.815977 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 15 05:02:31.816024 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
May 15 05:02:31.818367 systemd[1]: Stopped target paths.target - Path Units.
May 15 05:02:31.821150 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 15 05:02:31.824378 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 15 05:02:31.825415 systemd[1]: Stopped target slices.target - Slice Units.
May 15 05:02:31.826397 systemd[1]: Stopped target sockets.target - Socket Units.
May 15 05:02:31.827589 systemd[1]: iscsid.socket: Deactivated successfully.
May 15 05:02:31.827627 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
May 15 05:02:31.828919 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 15 05:02:31.828954 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 15 05:02:31.829953 systemd[1]: ignition-setup.service: Deactivated successfully.
May 15 05:02:31.829997 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
May 15 05:02:31.830972 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
May 15 05:02:31.831011 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
May 15 05:02:31.832469 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
May 15 05:02:31.833587 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
May 15 05:02:31.835489 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 15 05:02:31.835990 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 15 05:02:31.836069 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
May 15 05:02:31.836387 systemd-networkd[706]: eth0: DHCPv6 lease lost
May 15 05:02:31.837822 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 15 05:02:31.837912 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
May 15 05:02:31.839123 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 15 05:02:31.839171 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
May 15 05:02:31.840563 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 15 05:02:31.840607 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
May 15 05:02:31.848470 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
May 15 05:02:31.853153 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 15 05:02:31.853209 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 15 05:02:31.854487 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 15 05:02:31.855748 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 15 05:02:31.855832 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
May 15 05:02:31.864709 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 15 05:02:31.864857 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 15 05:02:31.867002 systemd[1]: network-cleanup.service: Deactivated successfully.
May 15 05:02:31.867086 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
May 15 05:02:31.869257 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 15 05:02:31.869321 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
May 15 05:02:31.870125 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 15 05:02:31.870156 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
May 15 05:02:31.871280 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 15 05:02:31.871323 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
May 15 05:02:31.872953 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 15 05:02:31.872992 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
May 15 05:02:31.874155 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 15 05:02:31.874194 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 15 05:02:31.884672 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
May 15 05:02:31.885243 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 15 05:02:31.885293 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 15 05:02:31.885840 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 15 05:02:31.885879 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
May 15 05:02:31.886422 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 15 05:02:31.886460 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 15 05:02:31.887664 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
May 15 05:02:31.887703 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
May 15 05:02:31.888709 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 15 05:02:31.888748 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 15 05:02:31.892587 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 15 05:02:31.892678 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
May 15 05:02:31.893967 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
May 15 05:02:31.900500 systemd[1]: Starting initrd-switch-root.service - Switch Root...
May 15 05:02:31.906865 systemd[1]: Switching root.
May 15 05:02:31.943591 systemd-journald[185]: Journal stopped
May 15 05:02:33.613120 systemd-journald[185]: Received SIGTERM from PID 1 (systemd).
May 15 05:02:33.613174 kernel: SELinux: policy capability network_peer_controls=1
May 15 05:02:33.613191 kernel: SELinux: policy capability open_perms=1
May 15 05:02:33.613209 kernel: SELinux: policy capability extended_socket_class=1
May 15 05:02:33.613220 kernel: SELinux: policy capability always_check_network=0
May 15 05:02:33.613235 kernel: SELinux: policy capability cgroup_seclabel=1
May 15 05:02:33.613260 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 15 05:02:33.613282 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 15 05:02:33.613293 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 15 05:02:33.613304 kernel: audit: type=1403 audit(1747285352.670:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
May 15 05:02:33.613320 systemd[1]: Successfully loaded SELinux policy in 74.809ms.
May 15 05:02:33.613357 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 19.142ms.
May 15 05:02:33.613372 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
May 15 05:02:33.613385 systemd[1]: Detected virtualization kvm.
May 15 05:02:33.613397 systemd[1]: Detected architecture x86-64.
May 15 05:02:33.613412 systemd[1]: Detected first boot.
May 15 05:02:33.613425 systemd[1]: Hostname set to .
May 15 05:02:33.613436 systemd[1]: Initializing machine ID from VM UUID.
May 15 05:02:33.613450 zram_generator::config[995]: No configuration found.
May 15 05:02:33.613466 systemd[1]: Populated /etc with preset unit settings.
May 15 05:02:33.613478 systemd[1]: initrd-switch-root.service: Deactivated successfully.
May 15 05:02:33.613490 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
May 15 05:02:33.613502 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
May 15 05:02:33.613514 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
May 15 05:02:33.613526 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
May 15 05:02:33.613538 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
May 15 05:02:33.613549 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
May 15 05:02:33.613561 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
May 15 05:02:33.613577 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
May 15 05:02:33.613590 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
May 15 05:02:33.613602 systemd[1]: Created slice user.slice - User and Session Slice.
May 15 05:02:33.613617 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 15 05:02:33.613630 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 15 05:02:33.613647 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
May 15 05:02:33.613659 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
May 15 05:02:33.613672 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
May 15 05:02:33.613688 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 15 05:02:33.613700 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
May 15 05:02:33.613712 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 15 05:02:33.613726 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
May 15 05:02:33.613740 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
May 15 05:02:33.613752 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
May 15 05:02:33.613767 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
May 15 05:02:33.613779 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 15 05:02:33.613792 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 15 05:02:33.613805 systemd[1]: Reached target slices.target - Slice Units.
May 15 05:02:33.613817 systemd[1]: Reached target swap.target - Swaps.
May 15 05:02:33.613830 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
May 15 05:02:33.613842 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
May 15 05:02:33.613854 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 15 05:02:33.613867 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 15 05:02:33.613879 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 15 05:02:33.613894 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
May 15 05:02:33.613908 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
May 15 05:02:33.613920 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
May 15 05:02:33.613932 systemd[1]: Mounting media.mount - External Media Directory...
May 15 05:02:33.613945 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 15 05:02:33.613958 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
May 15 05:02:33.613970 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
May 15 05:02:33.613983 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
May 15 05:02:33.613997 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 15 05:02:33.614013 systemd[1]: Reached target machines.target - Containers.
May 15 05:02:33.614025 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
May 15 05:02:33.614038 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 15 05:02:33.614050 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 15 05:02:33.614063 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
May 15 05:02:33.614075 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 15 05:02:33.614088 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 15 05:02:33.614100 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 15 05:02:33.614115 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
May 15 05:02:33.614129 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 15 05:02:33.614142 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 15 05:02:33.614154 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
May 15 05:02:33.614167 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
May 15 05:02:33.614179 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
May 15 05:02:33.614191 systemd[1]: Stopped systemd-fsck-usr.service.
May 15 05:02:33.614203 kernel: fuse: init (API version 7.39)
May 15 05:02:33.614217 systemd[1]: Starting systemd-journald.service - Journal Service...
May 15 05:02:33.614230 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 15 05:02:33.614243 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 15 05:02:33.614255 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
May 15 05:02:33.614269 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 15 05:02:33.614282 systemd[1]: verity-setup.service: Deactivated successfully.
May 15 05:02:33.614295 systemd[1]: Stopped verity-setup.service.
May 15 05:02:33.614307 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 15 05:02:33.614336 systemd-journald[1091]: Collecting audit messages is disabled.
May 15 05:02:33.615029 kernel: loop: module loaded
May 15 05:02:33.615044 kernel: ACPI: bus type drm_connector registered
May 15 05:02:33.615057 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
May 15 05:02:33.615072 systemd-journald[1091]: Journal started
May 15 05:02:33.615104 systemd-journald[1091]: Runtime Journal (/run/log/journal/f1ac09b8d995455883619f2a6726a479) is 8.0M, max 78.3M, 70.3M free.
May 15 05:02:33.264103 systemd[1]: Queued start job for default target multi-user.target.
May 15 05:02:33.286967 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
May 15 05:02:33.287322 systemd[1]: systemd-journald.service: Deactivated successfully.
May 15 05:02:33.618415 systemd[1]: Started systemd-journald.service - Journal Service.
May 15 05:02:33.618685 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
May 15 05:02:33.619421 systemd[1]: Mounted media.mount - External Media Directory.
May 15 05:02:33.620488 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
May 15 05:02:33.621086 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
May 15 05:02:33.622817 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
May 15 05:02:33.623751 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
May 15 05:02:33.624548 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 15 05:02:33.625295 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 15 05:02:33.625438 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
May 15 05:02:33.626220 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 15 05:02:33.626334 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 15 05:02:33.627068 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 15 05:02:33.627175 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 15 05:02:33.628105 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 15 05:02:33.628219 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 15 05:02:33.628980 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 15 05:02:33.629093 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
May 15 05:02:33.629869 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 15 05:02:33.629991 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 15 05:02:33.630713 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 15 05:02:33.631498 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 15 05:02:33.632256 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
May 15 05:02:33.643762 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 15 05:02:33.652060 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
May 15 05:02:33.657137 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
May 15 05:02:33.658151 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 15 05:02:33.658189 systemd[1]: Reached target local-fs.target - Local File Systems.
May 15 05:02:33.659814 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
May 15 05:02:33.668793 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
May 15 05:02:33.672450 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
May 15 05:02:33.673068 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 15 05:02:33.679059 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
May 15 05:02:33.690235 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
May 15 05:02:33.690858 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 15 05:02:33.694459 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
May 15 05:02:33.695052 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 15 05:02:33.698475 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 15 05:02:33.704521 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
May 15 05:02:33.707909 systemd[1]: Starting systemd-sysusers.service - Create System Users...
May 15 05:02:33.709948 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 15 05:02:33.710710 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
May 15 05:02:33.711836 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
May 15 05:02:33.713011 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
May 15 05:02:33.725443 systemd-journald[1091]: Time spent on flushing to /var/log/journal/f1ac09b8d995455883619f2a6726a479 is 70.930ms for 947 entries.
May 15 05:02:33.725443 systemd-journald[1091]: System Journal (/var/log/journal/f1ac09b8d995455883619f2a6726a479) is 8.0M, max 584.8M, 576.8M free.
May 15 05:02:33.821569 systemd-journald[1091]: Received client request to flush runtime journal.
May 15 05:02:33.821617 kernel: loop0: detected capacity change from 0 to 8
May 15 05:02:33.821642 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 15 05:02:33.821660 kernel: loop1: detected capacity change from 0 to 205544
May 15 05:02:33.722259 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
May 15 05:02:33.743278 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
May 15 05:02:33.754554 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
May 15 05:02:33.761606 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
May 15 05:02:33.762703 udevadm[1134]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
May 15 05:02:33.786249 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 15 05:02:33.822968 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
May 15 05:02:33.869535 systemd[1]: Finished systemd-sysusers.service - Create System Users.
May 15 05:02:33.875275 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 15 05:02:33.878121 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
May 15 05:02:33.894773 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 15 05:02:33.904217 kernel: loop2: detected capacity change from 0 to 140992
May 15 05:02:33.918037 systemd-tmpfiles[1148]: ACLs are not supported, ignoring.
May 15 05:02:33.918057 systemd-tmpfiles[1148]: ACLs are not supported, ignoring.
May 15 05:02:33.922685 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 15 05:02:33.970388 kernel: loop3: detected capacity change from 0 to 138184
May 15 05:02:34.028757 kernel: loop4: detected capacity change from 0 to 8
May 15 05:02:34.032379 kernel: loop5: detected capacity change from 0 to 205544
May 15 05:02:34.127493 kernel: loop6: detected capacity change from 0 to 140992
May 15 05:02:34.159377 kernel: loop7: detected capacity change from 0 to 138184
May 15 05:02:34.213516 (sd-merge)[1153]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'.
May 15 05:02:34.214571 (sd-merge)[1153]: Merged extensions into '/usr'.
May 15 05:02:34.222779 systemd[1]: Reloading requested from client PID 1128 ('systemd-sysext') (unit systemd-sysext.service)...
May 15 05:02:34.222909 systemd[1]: Reloading...
May 15 05:02:34.312380 zram_generator::config[1180]: No configuration found.
May 15 05:02:34.541966 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 15 05:02:34.627092 systemd[1]: Reloading finished in 403 ms.
May 15 05:02:34.664226 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
May 15 05:02:34.666051 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
May 15 05:02:34.680648 systemd[1]: Starting ensure-sysext.service...
May 15 05:02:34.684641 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 15 05:02:34.688753 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 15 05:02:34.704463 systemd[1]: Reloading requested from client PID 1236 ('systemctl') (unit ensure-sysext.service)...
May 15 05:02:34.704480 systemd[1]: Reloading...
May 15 05:02:34.715162 ldconfig[1123]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 15 05:02:34.719118 systemd-tmpfiles[1237]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 15 05:02:34.720644 systemd-tmpfiles[1237]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
May 15 05:02:34.722464 systemd-tmpfiles[1237]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 15 05:02:34.723091 systemd-tmpfiles[1237]: ACLs are not supported, ignoring.
May 15 05:02:34.723156 systemd-tmpfiles[1237]: ACLs are not supported, ignoring.
May 15 05:02:34.727047 systemd-tmpfiles[1237]: Detected autofs mount point /boot during canonicalization of boot.
May 15 05:02:34.727059 systemd-tmpfiles[1237]: Skipping /boot
May 15 05:02:34.742782 systemd-tmpfiles[1237]: Detected autofs mount point /boot during canonicalization of boot.
May 15 05:02:34.742795 systemd-tmpfiles[1237]: Skipping /boot
May 15 05:02:34.754754 systemd-udevd[1238]: Using default interface naming scheme 'v255'.
May 15 05:02:34.785401 zram_generator::config[1261]: No configuration found.
May 15 05:02:34.909461 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 46 scanned by (udev-worker) (1273)
May 15 05:02:34.983785 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
May 15 05:02:34.991385 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
May 15 05:02:35.004711 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 15 05:02:35.009405 kernel: ACPI: button: Power Button [PWRF]
May 15 05:02:35.022513 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
May 15 05:02:35.065376 kernel: mousedev: PS/2 mouse device common for all mice
May 15 05:02:35.080034 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
May 15 05:02:35.080103 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
May 15 05:02:35.087075 kernel: Console: switching to colour dummy device 80x25
May 15 05:02:35.087122 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
May 15 05:02:35.087143 kernel: [drm] features: -context_init
May 15 05:02:35.091404 kernel: [drm] number of scanouts: 1
May 15 05:02:35.094377 kernel: [drm] number of cap sets: 0
May 15 05:02:35.097369 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0
May 15 05:02:35.109175 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
May 15 05:02:35.109268 kernel: Console: switching to colour frame buffer device 160x50
May 15 05:02:35.105877 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 15 05:02:35.115950 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
May 15 05:02:35.116542 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
May 15 05:02:35.116797 systemd[1]: Reloading finished in 412 ms.
May 15 05:02:35.128271 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 15 05:02:35.130585 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
May 15 05:02:35.138679 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 15 05:02:35.160029 systemd[1]: Finished ensure-sysext.service.
May 15 05:02:35.166054 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
May 15 05:02:35.181198 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 15 05:02:35.185482 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 15 05:02:35.188489 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
May 15 05:02:35.190581 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 15 05:02:35.192488 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
May 15 05:02:35.194223 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 15 05:02:35.197502 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 15 05:02:35.200502 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 15 05:02:35.208562 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 15 05:02:35.210210 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 15 05:02:35.212504 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
May 15 05:02:35.218548 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
May 15 05:02:35.222666 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 15 05:02:35.226149 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 15 05:02:35.240538 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
May 15 05:02:35.244537 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
May 15 05:02:35.247533 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 15 05:02:35.249686 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 15 05:02:35.250405 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 15 05:02:35.250558 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 15 05:02:35.250936 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 15 05:02:35.251069 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 15 05:02:35.252496 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 15 05:02:35.252672 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 15 05:02:35.258142 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 15 05:02:35.258283 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 15 05:02:35.259115 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
May 15 05:02:35.277090 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 15 05:02:35.277265 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 15 05:02:35.284443 lvm[1359]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 15 05:02:35.285673 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
May 15 05:02:35.291169 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
May 15 05:02:35.318488 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
May 15 05:02:35.329556 systemd[1]: Starting systemd-update-done.service - Update is Completed...
May 15 05:02:35.335309 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
May 15 05:02:35.339225 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 15 05:02:35.353172 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
May 15 05:02:35.365390 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
May 15 05:02:35.366298 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 15 05:02:35.368485 systemd[1]: Started systemd-userdbd.service - User Database Manager.
May 15 05:02:35.371877 systemd[1]: Finished systemd-update-done.service - Update is Completed.
May 15 05:02:35.377388 augenrules[1409]: No rules
May 15 05:02:35.379741 systemd[1]: audit-rules.service: Deactivated successfully.
May 15 05:02:35.382220 lvm[1401]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 15 05:02:35.382681 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 15 05:02:35.411900 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
May 15 05:02:35.472607 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 15 05:02:35.475602 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
May 15 05:02:35.479161 systemd[1]: Reached target time-set.target - System Time Set.
May 15 05:02:35.509385 systemd-resolved[1372]: Positive Trust Anchors:
May 15 05:02:35.509406 systemd-resolved[1372]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 15 05:02:35.509448 systemd-resolved[1372]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 15 05:02:35.581764 systemd-networkd[1371]: lo: Link UP
May 15 05:02:35.581785 systemd-networkd[1371]: lo: Gained carrier
May 15 05:02:35.585883 systemd-networkd[1371]: Enumeration completed
May 15 05:02:35.586582 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 15 05:02:35.589143 systemd-resolved[1372]: Using system hostname 'ci-4152-2-3-n-5005c4e40f.novalocal'.
May 15 05:02:35.592405 systemd-networkd[1371]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 15 05:02:35.593717 systemd-networkd[1371]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 15 05:02:35.597739 systemd-networkd[1371]: eth0: Link UP
May 15 05:02:35.597967 systemd-networkd[1371]: eth0: Gained carrier
May 15 05:02:35.598160 systemd-networkd[1371]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 15 05:02:35.605440 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
May 15 05:02:35.608055 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 15 05:02:35.615921 systemd[1]: Reached target network.target - Network.
May 15 05:02:35.620421 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 15 05:02:35.624220 systemd-networkd[1371]: eth0: DHCPv4 address 172.24.4.5/24, gateway 172.24.4.1 acquired from 172.24.4.1
May 15 05:02:35.624621 systemd[1]: Reached target sysinit.target - System Initialization.
May 15 05:02:35.626069 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
May 15 05:02:35.626313 systemd-timesyncd[1373]: Network configuration changed, trying to establish connection.
May 15 05:02:35.627293 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
May 15 05:02:35.633200 systemd[1]: Started logrotate.timer - Daily rotation of log files.
May 15 05:02:35.636688 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
May 15 05:02:35.639832 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
May 15 05:02:35.643142 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
May 15 05:02:35.643413 systemd[1]: Reached target paths.target - Path Units.
May 15 05:02:35.646737 systemd[1]: Reached target timers.target - Timer Units.
May 15 05:02:35.678667 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
May 15 05:02:35.688726 systemd[1]: Starting docker.socket - Docker Socket for the API...
May 15 05:02:35.709875 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
May 15 05:02:35.715607 systemd[1]: Listening on docker.socket - Docker Socket for the API.
May 15 05:02:35.719106 systemd[1]: Reached target sockets.target - Socket Units.
May 15 05:02:35.722449 systemd[1]: Reached target basic.target - Basic System.
May 15 05:02:35.725707 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
May 15 05:02:35.725771 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
May 15 05:02:35.737634 systemd[1]: Starting containerd.service - containerd container runtime...
May 15 05:02:35.747151 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
May 15 05:02:35.763966 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
May 15 05:02:35.778577 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
May 15 05:02:35.794077 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
May 15 05:02:35.796662 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
May 15 05:02:35.810530 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
May 15 05:02:35.815526 jq[1430]: false
May 15 05:02:35.817951 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
May 15 05:02:35.823670 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
May 15 05:02:35.828569 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
May 15 05:02:35.839751 systemd[1]: Starting systemd-logind.service - User Login Management...
May 15 05:02:35.846140 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
May 15 05:02:35.846849 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
May 15 05:02:35.851577 systemd[1]: Starting update-engine.service - Update Engine...
May 15 05:02:35.857195 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
May 15 05:02:35.867759 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
May 15 05:02:35.867946 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
May 15 05:02:35.875461 jq[1442]: true
May 15 05:02:35.884715 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
May 15 05:02:35.884883 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
May 15 05:02:35.895314 jq[1449]: true
May 15 05:02:35.974371 update_engine[1441]: I20250515 05:02:35.967511 1441 main.cc:92] Flatcar Update Engine starting
May 15 05:02:35.970710 systemd[1]: motdgen.service: Deactivated successfully.
May 15 05:02:35.970900 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
May 15 05:02:35.971777 systemd-logind[1440]: New seat seat0.
May 15 05:02:35.976876 systemd-logind[1440]: Watching system buttons on /dev/input/event1 (Power Button)
May 15 05:02:35.976901 systemd-logind[1440]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
May 15 05:02:35.977095 systemd[1]: Started systemd-logind.service - User Login Management.
May 15 05:02:35.982967 (ntainerd)[1475]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
May 15 05:02:35.984988 extend-filesystems[1433]: Found loop4
May 15 05:02:35.984988 extend-filesystems[1433]: Found loop5
May 15 05:02:35.984988 extend-filesystems[1433]: Found loop6
May 15 05:02:35.984988 extend-filesystems[1433]: Found loop7
May 15 05:02:35.984988 extend-filesystems[1433]: Found vda
May 15 05:02:35.984988 extend-filesystems[1433]: Found vda1
May 15 05:02:35.984988 extend-filesystems[1433]: Found vda2
May 15 05:02:35.984988 extend-filesystems[1433]: Found vda3
May 15 05:02:35.984988 extend-filesystems[1433]: Found usr
May 15 05:02:35.984988 extend-filesystems[1433]: Found vda4
May 15 05:02:35.984988 extend-filesystems[1433]: Found vda6
May 15 05:02:35.984988 extend-filesystems[1433]: Found vda7
May 15 05:02:35.984988 extend-filesystems[1433]: Found vda9
May 15 05:02:35.984988 extend-filesystems[1433]: Checking size of /dev/vda9
May 15 05:02:36.012458 tar[1444]: linux-amd64/helm
May 15 05:02:36.055651 extend-filesystems[1433]: Resized partition /dev/vda9
May 15 05:02:36.073024 extend-filesystems[1484]: resize2fs 1.47.1 (20-May-2024)
May 15 05:02:36.075323 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 46 scanned by (udev-worker) (1273)
May 15 05:02:36.066032 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
May 15 05:02:36.085480 bash[1467]: Updated "/home/core/.ssh/authorized_keys"
May 15 05:02:36.100454 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 2014203 blocks
May 15 05:02:36.085683 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
May 15 05:02:36.100733 systemd[1]: Starting sshkeys.service...
May 15 05:02:36.112216 kernel: EXT4-fs (vda9): resized filesystem to 2014203
May 15 05:02:36.117899 dbus-daemon[1429]: [system] SELinux support is enabled
May 15 05:02:36.129325 dbus-daemon[1429]: [system] Successfully activated service 'org.freedesktop.systemd1'
May 15 05:02:36.180238 update_engine[1441]: I20250515 05:02:36.137176 1441 update_check_scheduler.cc:74] Next update check in 4m13s
May 15 05:02:36.122594 systemd[1]: Started dbus.service - D-Bus System Message Bus.
May 15 05:02:36.129822 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
May 15 05:02:36.129850 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
May 15 05:02:36.133308 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
May 15 05:02:36.133330 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
May 15 05:02:36.144517 systemd[1]: Started update-engine.service - Update Engine.
May 15 05:02:36.156770 systemd[1]: Started locksmithd.service - Cluster reboot manager.
May 15 05:02:36.181451 extend-filesystems[1484]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
May 15 05:02:36.181451 extend-filesystems[1484]: old_desc_blocks = 1, new_desc_blocks = 1
May 15 05:02:36.181451 extend-filesystems[1484]: The filesystem on /dev/vda9 is now 2014203 (4k) blocks long.
May 15 05:02:36.197742 extend-filesystems[1433]: Resized filesystem in /dev/vda9
May 15 05:02:36.185435 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
May 15 05:02:36.201752 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
May 15 05:02:36.206135 systemd[1]: extend-filesystems.service: Deactivated successfully.
May 15 05:02:36.207520 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
May 15 05:02:36.428954 locksmithd[1489]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
May 15 05:02:36.517642 containerd[1475]: time="2025-05-15T05:02:36.517566241Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
May 15 05:02:36.542675 sshd_keygen[1470]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
May 15 05:02:36.560918 containerd[1475]: time="2025-05-15T05:02:36.560629225Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
May 15 05:02:36.565912 containerd[1475]: time="2025-05-15T05:02:36.564750655Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.89-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
May 15 05:02:36.565912 containerd[1475]: time="2025-05-15T05:02:36.564787554Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
May 15 05:02:36.565912 containerd[1475]: time="2025-05-15T05:02:36.564808183Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
May 15 05:02:36.565912 containerd[1475]: time="2025-05-15T05:02:36.564989203Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
May 15 05:02:36.565912 containerd[1475]: time="2025-05-15T05:02:36.565010312Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
May 15 05:02:36.565912 containerd[1475]: time="2025-05-15T05:02:36.565081456Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
May 15 05:02:36.565912 containerd[1475]: time="2025-05-15T05:02:36.565100722Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
May 15 05:02:36.565912 containerd[1475]: time="2025-05-15T05:02:36.565270430Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
May 15 05:02:36.565912 containerd[1475]: time="2025-05-15T05:02:36.565289967Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
May 15 05:02:36.565912 containerd[1475]: time="2025-05-15T05:02:36.565305275Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
May 15 05:02:36.565912 containerd[1475]: time="2025-05-15T05:02:36.565317939Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
May 15 05:02:36.566193 containerd[1475]: time="2025-05-15T05:02:36.565423527Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
May 15 05:02:36.566193 containerd[1475]: time="2025-05-15T05:02:36.565634693Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
May 15 05:02:36.566193 containerd[1475]: time="2025-05-15T05:02:36.565734460Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
May 15 05:02:36.566193 containerd[1475]: time="2025-05-15T05:02:36.565752674Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
May 15 05:02:36.566193 containerd[1475]: time="2025-05-15T05:02:36.565831492Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
May 15 05:02:36.566193 containerd[1475]: time="2025-05-15T05:02:36.565881807Z" level=info msg="metadata content store policy set" policy=shared
May 15 05:02:36.574955 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
May 15 05:02:36.580551 containerd[1475]: time="2025-05-15T05:02:36.577930714Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
May 15 05:02:36.580551 containerd[1475]: time="2025-05-15T05:02:36.578005665Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
May 15 05:02:36.580551 containerd[1475]: time="2025-05-15T05:02:36.578030451Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
May 15 05:02:36.580551 containerd[1475]: time="2025-05-15T05:02:36.578064145Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
May 15 05:02:36.580551 containerd[1475]: time="2025-05-15T05:02:36.578087068Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
May 15 05:02:36.580551 containerd[1475]: time="2025-05-15T05:02:36.578280751Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
May 15 05:02:36.580551 containerd[1475]: time="2025-05-15T05:02:36.578589971Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
May 15 05:02:36.580551 containerd[1475]: time="2025-05-15T05:02:36.578702452Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
May 15 05:02:36.580551 containerd[1475]: time="2025-05-15T05:02:36.578723371Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
May 15 05:02:36.580551 containerd[1475]: time="2025-05-15T05:02:36.578740944Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
May 15 05:02:36.580551 containerd[1475]: time="2025-05-15T05:02:36.578760260Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
May 15 05:02:36.580551 containerd[1475]: time="2025-05-15T05:02:36.578778344Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
May 15 05:02:36.580551 containerd[1475]: time="2025-05-15T05:02:36.578800415Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
May 15 05:02:36.580551 containerd[1475]: time="2025-05-15T05:02:36.578819020Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
May 15 05:02:36.580937 containerd[1475]: time="2025-05-15T05:02:36.578837114Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
May 15 05:02:36.580937 containerd[1475]: time="2025-05-15T05:02:36.578853455Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
May 15 05:02:36.580937 containerd[1475]: time="2025-05-15T05:02:36.578868874Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
May 15 05:02:36.580937 containerd[1475]: time="2025-05-15T05:02:36.578886938Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
May 15 05:02:36.580937 containerd[1475]: time="2025-05-15T05:02:36.578911524Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
May 15 05:02:36.580937 containerd[1475]: time="2025-05-15T05:02:36.578931151Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
May 15 05:02:36.580937 containerd[1475]: time="2025-05-15T05:02:36.578948924Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
May 15 05:02:36.580937 containerd[1475]: time="2025-05-15T05:02:36.578965936Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
May 15 05:02:36.580937 containerd[1475]: time="2025-05-15T05:02:36.578981014Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
May 15 05:02:36.580937 containerd[1475]: time="2025-05-15T05:02:36.578998126Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
May 15 05:02:36.580937 containerd[1475]: time="2025-05-15T05:02:36.579013816Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
May 15 05:02:36.580937 containerd[1475]: time="2025-05-15T05:02:36.579029064Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
May 15 05:02:36.580937 containerd[1475]: time="2025-05-15T05:02:36.579044393Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
May 15 05:02:36.580937 containerd[1475]: time="2025-05-15T05:02:36.579060393Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
May 15 05:02:36.581244 containerd[1475]: time="2025-05-15T05:02:36.579076633Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
May 15 05:02:36.581244 containerd[1475]: time="2025-05-15T05:02:36.579090680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
May 15 05:02:36.581244 containerd[1475]: time="2025-05-15T05:02:36.579105828Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
May 15 05:02:36.581244 containerd[1475]: time="2025-05-15T05:02:36.579122610Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
May 15 05:02:36.581244 containerd[1475]: time="2025-05-15T05:02:36.579146845Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
May 15 05:02:36.581244 containerd[1475]: time="2025-05-15T05:02:36.579164098Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
May 15 05:02:36.581244 containerd[1475]: time="2025-05-15T05:02:36.579176601Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
May 15 05:02:36.581244 containerd[1475]: time="2025-05-15T05:02:36.579229019Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
May 15 05:02:36.581244 containerd[1475]: time="2025-05-15T05:02:36.579249438Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
May 15 05:02:36.581244 containerd[1475]: time="2025-05-15T05:02:36.579271369Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
May 15 05:02:36.581244 containerd[1475]: time="2025-05-15T05:02:36.579293440Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
May 15 05:02:36.581244 containerd[1475]: time="2025-05-15T05:02:36.579311454Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
May 15 05:02:36.581244 containerd[1475]: time="2025-05-15T05:02:36.579327324Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
May 15 05:02:36.581244 containerd[1475]: time="2025-05-15T05:02:36.579357260Z" level=info msg="NRI interface is disabled by configuration."
May 15 05:02:36.582603 containerd[1475]: time="2025-05-15T05:02:36.579370535Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
May 15 05:02:36.582628 containerd[1475]: time="2025-05-15T05:02:36.579861916Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
May 15 05:02:36.582628 containerd[1475]: time="2025-05-15T05:02:36.579929383Z" level=info msg="Connect containerd service"
May 15 05:02:36.582628 containerd[1475]: time="2025-05-15T05:02:36.579972173Z" level=info msg="using legacy CRI server"
May 15 05:02:36.582628 containerd[1475]: time="2025-05-15T05:02:36.579980449Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
May 15 05:02:36.582628 containerd[1475]: time="2025-05-15T05:02:36.580117736Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
May 15 05:02:36.585635 systemd[1]: Starting issuegen.service - Generate /run/issue...
May 15 05:02:36.589309 containerd[1475]: time="2025-05-15T05:02:36.589004513Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 15 05:02:36.593101 containerd[1475]: time="2025-05-15T05:02:36.589434880Z" level=info msg="Start subscribing containerd event"
May 15 05:02:36.593101 containerd[1475]: time="2025-05-15T05:02:36.589525801Z" level=info msg="Start recovering state"
May 15 05:02:36.593101 containerd[1475]: time="2025-05-15T05:02:36.589607254Z" level=info msg="Start event monitor"
May 15 05:02:36.593101 containerd[1475]: time="2025-05-15T05:02:36.589626349Z" level=info msg="Start snapshots syncer"
May 15 05:02:36.593101 containerd[1475]: time="2025-05-15T05:02:36.589637571Z" level=info msg="Start cni network conf syncer for default"
May 15 05:02:36.593101 containerd[1475]: time="2025-05-15T05:02:36.589647730Z" level=info msg="Start streaming server"
May 15 05:02:36.593101 containerd[1475]: time="2025-05-15T05:02:36.589475266Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
May 15 05:02:36.593101 containerd[1475]: time="2025-05-15T05:02:36.589816486Z" level=info msg=serving... address=/run/containerd/containerd.sock
May 15 05:02:36.593101 containerd[1475]: time="2025-05-15T05:02:36.589864616Z" level=info msg="containerd successfully booted in 0.073257s"
May 15 05:02:36.597868 systemd[1]: Started sshd@0-172.24.4.5:22-172.24.4.1:48902.service - OpenSSH per-connection server daemon (172.24.4.1:48902).
May 15 05:02:36.601063 systemd[1]: Started containerd.service - containerd container runtime.
May 15 05:02:36.617621 systemd[1]: issuegen.service: Deactivated successfully. May 15 05:02:36.617918 systemd[1]: Finished issuegen.service - Generate /run/issue. May 15 05:02:36.634751 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 15 05:02:36.648451 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 15 05:02:36.658734 systemd[1]: Started getty@tty1.service - Getty on tty1. May 15 05:02:36.664231 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. May 15 05:02:36.666219 systemd[1]: Reached target getty.target - Login Prompts. May 15 05:02:36.877210 tar[1444]: linux-amd64/LICENSE May 15 05:02:36.877474 tar[1444]: linux-amd64/README.md May 15 05:02:36.904371 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 15 05:02:37.416625 systemd-networkd[1371]: eth0: Gained IPv6LL May 15 05:02:37.417730 systemd-timesyncd[1373]: Network configuration changed, trying to establish connection. May 15 05:02:37.421484 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 15 05:02:37.424745 systemd[1]: Reached target network-online.target - Network is Online. May 15 05:02:37.436135 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 05:02:37.449844 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 15 05:02:37.499877 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 15 05:02:37.544013 sshd[1515]: Accepted publickey for core from 172.24.4.1 port 48902 ssh2: RSA SHA256:uRPz9tu+U72XmHvlxQy05CWlny1Lwcfk85X9SSPLUAc May 15 05:02:37.545578 sshd-session[1515]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 05:02:37.565401 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 15 05:02:37.580128 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... 
May 15 05:02:37.590316 systemd-logind[1440]: New session 1 of user core. May 15 05:02:37.599687 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 15 05:02:37.612727 systemd[1]: Starting user@500.service - User Manager for UID 500... May 15 05:02:37.625035 (systemd)[1542]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 15 05:02:37.744973 systemd[1542]: Queued start job for default target default.target. May 15 05:02:37.750545 systemd[1542]: Created slice app.slice - User Application Slice. May 15 05:02:37.750570 systemd[1542]: Reached target paths.target - Paths. May 15 05:02:37.750585 systemd[1542]: Reached target timers.target - Timers. May 15 05:02:37.754460 systemd[1542]: Starting dbus.socket - D-Bus User Message Bus Socket... May 15 05:02:37.764210 systemd[1542]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 15 05:02:37.765030 systemd[1542]: Reached target sockets.target - Sockets. May 15 05:02:37.765154 systemd[1542]: Reached target basic.target - Basic System. May 15 05:02:37.765196 systemd[1542]: Reached target default.target - Main User Target. May 15 05:02:37.765226 systemd[1542]: Startup finished in 132ms. May 15 05:02:37.765771 systemd[1]: Started user@500.service - User Manager for UID 500. May 15 05:02:37.776683 systemd[1]: Started session-1.scope - Session 1 of User core. May 15 05:02:38.270131 systemd[1]: Started sshd@1-172.24.4.5:22-172.24.4.1:50652.service - OpenSSH per-connection server daemon (172.24.4.1:50652). May 15 05:02:39.401135 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 15 05:02:39.416191 (kubelet)[1562]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 15 05:02:40.245313 sshd[1553]: Accepted publickey for core from 172.24.4.1 port 50652 ssh2: RSA SHA256:uRPz9tu+U72XmHvlxQy05CWlny1Lwcfk85X9SSPLUAc May 15 05:02:40.246607 sshd-session[1553]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 05:02:40.254254 systemd-logind[1440]: New session 2 of user core. May 15 05:02:40.261836 systemd[1]: Started session-2.scope - Session 2 of User core. May 15 05:02:40.924747 sshd[1567]: Connection closed by 172.24.4.1 port 50652 May 15 05:02:40.925438 sshd-session[1553]: pam_unix(sshd:session): session closed for user core May 15 05:02:40.938197 systemd[1]: sshd@1-172.24.4.5:22-172.24.4.1:50652.service: Deactivated successfully. May 15 05:02:40.942479 systemd[1]: session-2.scope: Deactivated successfully. May 15 05:02:40.946581 systemd-logind[1440]: Session 2 logged out. Waiting for processes to exit. May 15 05:02:40.953128 systemd[1]: Started sshd@2-172.24.4.5:22-172.24.4.1:50664.service - OpenSSH per-connection server daemon (172.24.4.1:50664). May 15 05:02:40.963851 systemd-logind[1440]: Removed session 2. May 15 05:02:41.619708 kubelet[1562]: E0515 05:02:41.619596 1562 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 15 05:02:41.623757 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 15 05:02:41.624099 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 15 05:02:41.624801 systemd[1]: kubelet.service: Consumed 2.004s CPU time. 
May 15 05:02:41.725840 login[1522]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) May 15 05:02:41.734690 login[1521]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) May 15 05:02:41.738260 systemd-logind[1440]: New session 3 of user core. May 15 05:02:41.749086 systemd[1]: Started session-3.scope - Session 3 of User core. May 15 05:02:41.756900 systemd-logind[1440]: New session 4 of user core. May 15 05:02:41.766817 systemd[1]: Started session-4.scope - Session 4 of User core. May 15 05:02:42.474558 sshd[1573]: Accepted publickey for core from 172.24.4.1 port 50664 ssh2: RSA SHA256:uRPz9tu+U72XmHvlxQy05CWlny1Lwcfk85X9SSPLUAc May 15 05:02:42.477913 sshd-session[1573]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 05:02:42.489200 systemd-logind[1440]: New session 5 of user core. May 15 05:02:42.496860 systemd[1]: Started session-5.scope - Session 5 of User core. May 15 05:02:42.844488 coreos-metadata[1428]: May 15 05:02:42.843 WARN failed to locate config-drive, using the metadata service API instead May 15 05:02:42.892263 coreos-metadata[1428]: May 15 05:02:42.892 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1 May 15 05:02:43.083873 coreos-metadata[1428]: May 15 05:02:43.083 INFO Fetch successful May 15 05:02:43.083873 coreos-metadata[1428]: May 15 05:02:43.083 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 May 15 05:02:43.099113 coreos-metadata[1428]: May 15 05:02:43.098 INFO Fetch successful May 15 05:02:43.099113 coreos-metadata[1428]: May 15 05:02:43.098 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 May 15 05:02:43.112872 coreos-metadata[1428]: May 15 05:02:43.112 INFO Fetch successful May 15 05:02:43.112872 coreos-metadata[1428]: May 15 05:02:43.112 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 May 15 05:02:43.126794 
coreos-metadata[1428]: May 15 05:02:43.126 INFO Fetch successful May 15 05:02:43.126794 coreos-metadata[1428]: May 15 05:02:43.126 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 May 15 05:02:43.140244 coreos-metadata[1428]: May 15 05:02:43.140 INFO Fetch successful May 15 05:02:43.140244 coreos-metadata[1428]: May 15 05:02:43.140 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 May 15 05:02:43.151978 coreos-metadata[1428]: May 15 05:02:43.151 INFO Fetch successful May 15 05:02:43.206734 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. May 15 05:02:43.209247 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 15 05:02:43.210703 sshd[1603]: Connection closed by 172.24.4.1 port 50664 May 15 05:02:43.211656 sshd-session[1573]: pam_unix(sshd:session): session closed for user core May 15 05:02:43.218487 systemd[1]: sshd@2-172.24.4.5:22-172.24.4.1:50664.service: Deactivated successfully. May 15 05:02:43.222875 systemd[1]: session-5.scope: Deactivated successfully. May 15 05:02:43.225060 systemd-logind[1440]: Session 5 logged out. Waiting for processes to exit. May 15 05:02:43.227957 systemd-logind[1440]: Removed session 5. 
May 15 05:02:43.307874 coreos-metadata[1491]: May 15 05:02:43.307 WARN failed to locate config-drive, using the metadata service API instead May 15 05:02:43.349666 coreos-metadata[1491]: May 15 05:02:43.349 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 May 15 05:02:43.368652 coreos-metadata[1491]: May 15 05:02:43.368 INFO Fetch successful May 15 05:02:43.368854 coreos-metadata[1491]: May 15 05:02:43.368 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 May 15 05:02:43.384566 coreos-metadata[1491]: May 15 05:02:43.384 INFO Fetch successful May 15 05:02:43.389954 unknown[1491]: wrote ssh authorized keys file for user: core May 15 05:02:43.448079 update-ssh-keys[1616]: Updated "/home/core/.ssh/authorized_keys" May 15 05:02:43.449566 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). May 15 05:02:43.453068 systemd[1]: Finished sshkeys.service. May 15 05:02:43.458513 systemd[1]: Reached target multi-user.target - Multi-User System. May 15 05:02:43.458977 systemd[1]: Startup finished in 1.255s (kernel) + 15.852s (initrd) + 10.862s (userspace) = 27.970s. May 15 05:02:51.860873 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 15 05:02:51.871719 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 05:02:52.252177 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 15 05:02:52.260093 (kubelet)[1628]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 15 05:02:52.365608 kubelet[1628]: E0515 05:02:52.365469 1628 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 15 05:02:52.372032 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 15 05:02:52.372236 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 15 05:02:53.240087 systemd[1]: Started sshd@3-172.24.4.5:22-172.24.4.1:55448.service - OpenSSH per-connection server daemon (172.24.4.1:55448). May 15 05:02:54.372682 sshd[1636]: Accepted publickey for core from 172.24.4.1 port 55448 ssh2: RSA SHA256:uRPz9tu+U72XmHvlxQy05CWlny1Lwcfk85X9SSPLUAc May 15 05:02:54.375741 sshd-session[1636]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 05:02:54.386151 systemd-logind[1440]: New session 6 of user core. May 15 05:02:54.398738 systemd[1]: Started session-6.scope - Session 6 of User core. May 15 05:02:54.998918 sshd[1638]: Connection closed by 172.24.4.1 port 55448 May 15 05:02:55.000155 sshd-session[1636]: pam_unix(sshd:session): session closed for user core May 15 05:02:55.015088 systemd[1]: sshd@3-172.24.4.5:22-172.24.4.1:55448.service: Deactivated successfully. May 15 05:02:55.019092 systemd[1]: session-6.scope: Deactivated successfully. May 15 05:02:55.023782 systemd-logind[1440]: Session 6 logged out. Waiting for processes to exit. May 15 05:02:55.039146 systemd[1]: Started sshd@4-172.24.4.5:22-172.24.4.1:55456.service - OpenSSH per-connection server daemon (172.24.4.1:55456). May 15 05:02:55.043132 systemd-logind[1440]: Removed session 6. 
May 15 05:02:56.269171 sshd[1643]: Accepted publickey for core from 172.24.4.1 port 55456 ssh2: RSA SHA256:uRPz9tu+U72XmHvlxQy05CWlny1Lwcfk85X9SSPLUAc May 15 05:02:56.272293 sshd-session[1643]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 05:02:56.284839 systemd-logind[1440]: New session 7 of user core. May 15 05:02:56.298689 systemd[1]: Started session-7.scope - Session 7 of User core. May 15 05:02:56.950108 sshd[1645]: Connection closed by 172.24.4.1 port 55456 May 15 05:02:56.952871 sshd-session[1643]: pam_unix(sshd:session): session closed for user core May 15 05:02:56.964450 systemd[1]: sshd@4-172.24.4.5:22-172.24.4.1:55456.service: Deactivated successfully. May 15 05:02:56.968122 systemd[1]: session-7.scope: Deactivated successfully. May 15 05:02:56.973698 systemd-logind[1440]: Session 7 logged out. Waiting for processes to exit. May 15 05:02:56.979144 systemd[1]: Started sshd@5-172.24.4.5:22-172.24.4.1:55468.service - OpenSSH per-connection server daemon (172.24.4.1:55468). May 15 05:02:56.983083 systemd-logind[1440]: Removed session 7. May 15 05:02:58.598122 sshd[1650]: Accepted publickey for core from 172.24.4.1 port 55468 ssh2: RSA SHA256:uRPz9tu+U72XmHvlxQy05CWlny1Lwcfk85X9SSPLUAc May 15 05:02:58.601325 sshd-session[1650]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 05:02:58.612277 systemd-logind[1440]: New session 8 of user core. May 15 05:02:58.625688 systemd[1]: Started session-8.scope - Session 8 of User core. May 15 05:02:59.497784 sshd[1652]: Connection closed by 172.24.4.1 port 55468 May 15 05:02:59.498780 sshd-session[1650]: pam_unix(sshd:session): session closed for user core May 15 05:02:59.511135 systemd[1]: sshd@5-172.24.4.5:22-172.24.4.1:55468.service: Deactivated successfully. May 15 05:02:59.514129 systemd[1]: session-8.scope: Deactivated successfully. May 15 05:02:59.517651 systemd-logind[1440]: Session 8 logged out. Waiting for processes to exit. 
May 15 05:02:59.524898 systemd[1]: Started sshd@6-172.24.4.5:22-172.24.4.1:55484.service - OpenSSH per-connection server daemon (172.24.4.1:55484). May 15 05:02:59.528139 systemd-logind[1440]: Removed session 8. May 15 05:03:00.841309 sshd[1657]: Accepted publickey for core from 172.24.4.1 port 55484 ssh2: RSA SHA256:uRPz9tu+U72XmHvlxQy05CWlny1Lwcfk85X9SSPLUAc May 15 05:03:00.843924 sshd-session[1657]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 05:03:00.853933 systemd-logind[1440]: New session 9 of user core. May 15 05:03:00.867756 systemd[1]: Started session-9.scope - Session 9 of User core. May 15 05:03:01.424052 sudo[1660]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 15 05:03:01.424803 sudo[1660]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 15 05:03:01.445584 sudo[1660]: pam_unix(sudo:session): session closed for user root May 15 05:03:01.624521 sshd[1659]: Connection closed by 172.24.4.1 port 55484 May 15 05:03:01.625523 sshd-session[1657]: pam_unix(sshd:session): session closed for user core May 15 05:03:01.638096 systemd[1]: sshd@6-172.24.4.5:22-172.24.4.1:55484.service: Deactivated successfully. May 15 05:03:01.642239 systemd[1]: session-9.scope: Deactivated successfully. May 15 05:03:01.646719 systemd-logind[1440]: Session 9 logged out. Waiting for processes to exit. May 15 05:03:01.653077 systemd[1]: Started sshd@7-172.24.4.5:22-172.24.4.1:55488.service - OpenSSH per-connection server daemon (172.24.4.1:55488). May 15 05:03:01.657457 systemd-logind[1440]: Removed session 9. May 15 05:03:02.610812 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 15 05:03:02.619909 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
May 15 05:03:02.798650 sshd[1665]: Accepted publickey for core from 172.24.4.1 port 55488 ssh2: RSA SHA256:uRPz9tu+U72XmHvlxQy05CWlny1Lwcfk85X9SSPLUAc May 15 05:03:02.803198 sshd-session[1665]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 05:03:02.814046 systemd-logind[1440]: New session 10 of user core. May 15 05:03:02.824720 systemd[1]: Started session-10.scope - Session 10 of User core. May 15 05:03:02.948814 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 15 05:03:02.952805 (kubelet)[1675]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 15 05:03:03.003021 kubelet[1675]: E0515 05:03:03.002970 1675 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 15 05:03:03.006770 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 15 05:03:03.007038 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 15 05:03:03.166624 sudo[1684]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 15 05:03:03.167254 sudo[1684]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 15 05:03:03.174453 sudo[1684]: pam_unix(sudo:session): session closed for user root May 15 05:03:03.185644 sudo[1683]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules May 15 05:03:03.186275 sudo[1683]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 15 05:03:03.210575 systemd[1]: Starting audit-rules.service - Load Audit Rules... 
May 15 05:03:03.279221 augenrules[1706]: No rules May 15 05:03:03.280593 systemd[1]: audit-rules.service: Deactivated successfully. May 15 05:03:03.280942 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 15 05:03:03.282708 sudo[1683]: pam_unix(sudo:session): session closed for user root May 15 05:03:03.538089 sshd[1670]: Connection closed by 172.24.4.1 port 55488 May 15 05:03:03.538919 sshd-session[1665]: pam_unix(sshd:session): session closed for user core May 15 05:03:03.550456 systemd[1]: sshd@7-172.24.4.5:22-172.24.4.1:55488.service: Deactivated successfully. May 15 05:03:03.554155 systemd[1]: session-10.scope: Deactivated successfully. May 15 05:03:03.556744 systemd-logind[1440]: Session 10 logged out. Waiting for processes to exit. May 15 05:03:03.565030 systemd[1]: Started sshd@8-172.24.4.5:22-172.24.4.1:47486.service - OpenSSH per-connection server daemon (172.24.4.1:47486). May 15 05:03:03.567675 systemd-logind[1440]: Removed session 10. May 15 05:03:04.779151 sshd[1714]: Accepted publickey for core from 172.24.4.1 port 47486 ssh2: RSA SHA256:uRPz9tu+U72XmHvlxQy05CWlny1Lwcfk85X9SSPLUAc May 15 05:03:04.781840 sshd-session[1714]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 05:03:04.790807 systemd-logind[1440]: New session 11 of user core. May 15 05:03:04.798647 systemd[1]: Started session-11.scope - Session 11 of User core. May 15 05:03:05.138998 sudo[1717]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 15 05:03:05.140502 sudo[1717]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 15 05:03:05.836572 systemd[1]: Starting docker.service - Docker Application Container Engine... 
May 15 05:03:05.849052 (dockerd)[1736]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 15 05:03:06.610803 dockerd[1736]: time="2025-05-15T05:03:06.610710385Z" level=info msg="Starting up" May 15 05:03:06.750026 systemd[1]: var-lib-docker-metacopy\x2dcheck2803186011-merged.mount: Deactivated successfully. May 15 05:03:06.784167 dockerd[1736]: time="2025-05-15T05:03:06.784080240Z" level=info msg="Loading containers: start." May 15 05:03:07.033659 kernel: Initializing XFRM netlink socket May 15 05:03:07.089853 systemd-timesyncd[1373]: Network configuration changed, trying to establish connection. May 15 05:03:07.802171 systemd-resolved[1372]: Clock change detected. Flushing caches. May 15 05:03:07.803885 systemd-timesyncd[1373]: Contacted time server 108.61.73.243:123 (2.flatcar.pool.ntp.org). May 15 05:03:07.804048 systemd-timesyncd[1373]: Initial clock synchronization to Thu 2025-05-15 05:03:07.801276 UTC. May 15 05:03:07.825935 systemd-networkd[1371]: docker0: Link UP May 15 05:03:07.866936 dockerd[1736]: time="2025-05-15T05:03:07.866835879Z" level=info msg="Loading containers: done." May 15 05:03:07.907855 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2119525845-merged.mount: Deactivated successfully. 
May 15 05:03:07.908605 dockerd[1736]: time="2025-05-15T05:03:07.907833100Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 15 05:03:07.908605 dockerd[1736]: time="2025-05-15T05:03:07.907944428Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1 May 15 05:03:07.908605 dockerd[1736]: time="2025-05-15T05:03:07.908056198Z" level=info msg="Daemon has completed initialization" May 15 05:03:07.976891 dockerd[1736]: time="2025-05-15T05:03:07.976782905Z" level=info msg="API listen on /run/docker.sock" May 15 05:03:07.979901 systemd[1]: Started docker.service - Docker Application Container Engine. May 15 05:03:09.550292 containerd[1475]: time="2025-05-15T05:03:09.550176013Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\"" May 15 05:03:10.354898 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2046980817.mount: Deactivated successfully. 
May 15 05:03:12.101045 containerd[1475]: time="2025-05-15T05:03:12.101000805Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 05:03:12.103201 containerd[1475]: time="2025-05-15T05:03:12.103171135Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.8: active requests=0, bytes read=27960995" May 15 05:03:12.104823 containerd[1475]: time="2025-05-15T05:03:12.104745578Z" level=info msg="ImageCreate event name:\"sha256:e6d208e868a9ca7f89efcb0d5bddc55a62df551cb4fb39c5099a2fe7b0e33adc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 05:03:12.108606 containerd[1475]: time="2025-05-15T05:03:12.108559130Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 05:03:12.109920 containerd[1475]: time="2025-05-15T05:03:12.109778287Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.8\" with image id \"sha256:e6d208e868a9ca7f89efcb0d5bddc55a62df551cb4fb39c5099a2fe7b0e33adc\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625\", size \"27957787\" in 2.559528194s" May 15 05:03:12.109920 containerd[1475]: time="2025-05-15T05:03:12.109809886Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\" returns image reference \"sha256:e6d208e868a9ca7f89efcb0d5bddc55a62df551cb4fb39c5099a2fe7b0e33adc\"" May 15 05:03:12.112140 containerd[1475]: time="2025-05-15T05:03:12.112111242Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\"" May 15 05:03:13.724079 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. 
May 15 05:03:13.731896 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 05:03:13.824719 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 15 05:03:13.828993 (kubelet)[1989]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 15 05:03:14.247228 containerd[1475]: time="2025-05-15T05:03:14.247053015Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 05:03:14.256586 containerd[1475]: time="2025-05-15T05:03:14.256299216Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.8: active requests=0, bytes read=24713784" May 15 05:03:14.267227 containerd[1475]: time="2025-05-15T05:03:14.265544335Z" level=info msg="ImageCreate event name:\"sha256:fbda0bc3bc4bb93c8b2d8627a9aa8d945c200b51e48c88f9b837dde628fc7c8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 05:03:14.281214 containerd[1475]: time="2025-05-15T05:03:14.281113525Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 05:03:14.286133 containerd[1475]: time="2025-05-15T05:03:14.286052599Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.8\" with image id \"sha256:fbda0bc3bc4bb93c8b2d8627a9aa8d945c200b51e48c88f9b837dde628fc7c8f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747\", size \"26202149\" in 2.173723518s" May 15 05:03:14.286264 containerd[1475]: time="2025-05-15T05:03:14.286133360Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\" returns image 
reference \"sha256:fbda0bc3bc4bb93c8b2d8627a9aa8d945c200b51e48c88f9b837dde628fc7c8f\"" May 15 05:03:14.288619 containerd[1475]: time="2025-05-15T05:03:14.288564079Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\"" May 15 05:03:14.302928 kubelet[1989]: E0515 05:03:14.302864 1989 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 15 05:03:14.305708 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 15 05:03:14.305907 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 15 05:03:15.951053 containerd[1475]: time="2025-05-15T05:03:15.950924906Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 05:03:15.952828 containerd[1475]: time="2025-05-15T05:03:15.952582746Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.8: active requests=0, bytes read=18780394" May 15 05:03:15.954007 containerd[1475]: time="2025-05-15T05:03:15.953926055Z" level=info msg="ImageCreate event name:\"sha256:2a9c646db0be37003c2b50605a252f7139145411d9e4e0badd8ae07f56ce5eb8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 05:03:15.957353 containerd[1475]: time="2025-05-15T05:03:15.957288651Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 05:03:15.958771 containerd[1475]: time="2025-05-15T05:03:15.958656046Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.8\" with image id 
\"sha256:2a9c646db0be37003c2b50605a252f7139145411d9e4e0badd8ae07f56ce5eb8\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a\", size \"20268777\" in 1.66920872s" May 15 05:03:15.958771 containerd[1475]: time="2025-05-15T05:03:15.958688266Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\" returns image reference \"sha256:2a9c646db0be37003c2b50605a252f7139145411d9e4e0badd8ae07f56ce5eb8\"" May 15 05:03:15.959243 containerd[1475]: time="2025-05-15T05:03:15.959196129Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\"" May 15 05:03:17.350663 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount87289564.mount: Deactivated successfully. May 15 05:03:18.199463 containerd[1475]: time="2025-05-15T05:03:18.199389902Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 05:03:18.200798 containerd[1475]: time="2025-05-15T05:03:18.200608648Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.8: active requests=0, bytes read=30354633" May 15 05:03:18.202174 containerd[1475]: time="2025-05-15T05:03:18.202108420Z" level=info msg="ImageCreate event name:\"sha256:7d73f013cedcf301aef42272c93e4c1174dab1a8eccd96840091ef04b63480f2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 05:03:18.204725 containerd[1475]: time="2025-05-15T05:03:18.204680665Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 05:03:18.205496 containerd[1475]: time="2025-05-15T05:03:18.205291410Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.8\" with image id \"sha256:7d73f013cedcf301aef42272c93e4c1174dab1a8eccd96840091ef04b63480f2\", repo tag 
\"registry.k8s.io/kube-proxy:v1.31.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838\", size \"30353644\" in 2.245940401s" May 15 05:03:18.205496 containerd[1475]: time="2025-05-15T05:03:18.205369236Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\" returns image reference \"sha256:7d73f013cedcf301aef42272c93e4c1174dab1a8eccd96840091ef04b63480f2\"" May 15 05:03:18.206171 containerd[1475]: time="2025-05-15T05:03:18.205972918Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" May 15 05:03:18.878210 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount88108216.mount: Deactivated successfully. May 15 05:03:20.678010 containerd[1475]: time="2025-05-15T05:03:20.677921567Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 05:03:20.680773 containerd[1475]: time="2025-05-15T05:03:20.680660755Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185769" May 15 05:03:20.682661 containerd[1475]: time="2025-05-15T05:03:20.682530802Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 05:03:20.692703 containerd[1475]: time="2025-05-15T05:03:20.692602161Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 05:03:20.696055 containerd[1475]: time="2025-05-15T05:03:20.695795039Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest 
\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 2.489781574s" May 15 05:03:20.696055 containerd[1475]: time="2025-05-15T05:03:20.695873516Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" May 15 05:03:20.699375 containerd[1475]: time="2025-05-15T05:03:20.699250900Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 15 05:03:21.294078 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3524207229.mount: Deactivated successfully. May 15 05:03:21.307708 containerd[1475]: time="2025-05-15T05:03:21.307631358Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 05:03:21.310123 containerd[1475]: time="2025-05-15T05:03:21.310003898Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146" May 15 05:03:21.313166 containerd[1475]: time="2025-05-15T05:03:21.312610506Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 05:03:21.319565 containerd[1475]: time="2025-05-15T05:03:21.319464902Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 05:03:21.321814 containerd[1475]: time="2025-05-15T05:03:21.321755297Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 622.4419ms" May 15 
05:03:21.322029 containerd[1475]: time="2025-05-15T05:03:21.321986331Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" May 15 05:03:21.323607 containerd[1475]: time="2025-05-15T05:03:21.323489490Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" May 15 05:03:22.117596 update_engine[1441]: I20250515 05:03:22.117463 1441 update_attempter.cc:509] Updating boot flags... May 15 05:03:22.170640 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 46 scanned by (udev-worker) (2064) May 15 05:03:22.249384 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 46 scanned by (udev-worker) (2067) May 15 05:03:22.296994 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount289774214.mount: Deactivated successfully. May 15 05:03:22.305862 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 46 scanned by (udev-worker) (2067) May 15 05:03:24.474382 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. May 15 05:03:24.481493 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 05:03:24.633482 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 15 05:03:24.641731 (kubelet)[2131]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 15 05:03:24.833982 kubelet[2131]: E0515 05:03:24.805871 2131 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 15 05:03:24.808667 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 15 05:03:24.808960 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 15 05:03:25.172849 containerd[1475]: time="2025-05-15T05:03:25.171400426Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 05:03:25.270382 containerd[1475]: time="2025-05-15T05:03:25.270249367Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56780021" May 15 05:03:25.281167 containerd[1475]: time="2025-05-15T05:03:25.281029074Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 05:03:25.301423 containerd[1475]: time="2025-05-15T05:03:25.299402824Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 05:03:25.301794 containerd[1475]: time="2025-05-15T05:03:25.301719579Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest 
\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 3.977863602s" May 15 05:03:25.302039 containerd[1475]: time="2025-05-15T05:03:25.301980769Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" May 15 05:03:29.136693 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 15 05:03:29.144831 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 05:03:29.189844 systemd[1]: Reloading requested from client PID 2168 ('systemctl') (unit session-11.scope)... May 15 05:03:29.189879 systemd[1]: Reloading... May 15 05:03:29.289392 zram_generator::config[2207]: No configuration found. May 15 05:03:29.662710 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 15 05:03:29.744059 systemd[1]: Reloading finished in 553 ms. May 15 05:03:29.788979 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM May 15 05:03:29.789053 systemd[1]: kubelet.service: Failed with result 'signal'. May 15 05:03:29.789245 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 15 05:03:29.794571 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 05:03:30.441249 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 15 05:03:30.460594 (kubelet)[2271]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 15 05:03:30.504626 kubelet[2271]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 15 05:03:30.504626 kubelet[2271]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 15 05:03:30.504626 kubelet[2271]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 15 05:03:30.504626 kubelet[2271]: I0515 05:03:30.504624 2271 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 15 05:03:31.118309 kubelet[2271]: I0515 05:03:31.118257 2271 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" May 15 05:03:31.118437 kubelet[2271]: I0515 05:03:31.118315 2271 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 15 05:03:31.118861 kubelet[2271]: I0515 05:03:31.118835 2271 server.go:929] "Client rotation is on, will bootstrap in background" May 15 05:03:31.151883 kubelet[2271]: E0515 05:03:31.151844 2271 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.24.4.5:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.24.4.5:6443: connect: connection refused" logger="UnhandledError" May 15 05:03:31.152530 kubelet[2271]: I0515 05:03:31.152419 2271 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 15 05:03:31.165850 kubelet[2271]: E0515 05:03:31.165768 2271 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service 
runtime.v1.RuntimeService" May 15 05:03:31.165850 kubelet[2271]: I0515 05:03:31.165847 2271 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 15 05:03:31.178046 kubelet[2271]: I0515 05:03:31.178009 2271 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 15 05:03:31.181588 kubelet[2271]: I0515 05:03:31.181546 2271 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 15 05:03:31.181932 kubelet[2271]: I0515 05:03:31.181867 2271 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 15 05:03:31.182290 kubelet[2271]: I0515 05:03:31.181929 2271 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4152-2-3-n-5005c4e40f.novalocal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Ope
rator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 15 05:03:31.182407 kubelet[2271]: I0515 05:03:31.182294 2271 topology_manager.go:138] "Creating topology manager with none policy" May 15 05:03:31.182407 kubelet[2271]: I0515 05:03:31.182347 2271 container_manager_linux.go:300] "Creating device plugin manager" May 15 05:03:31.182570 kubelet[2271]: I0515 05:03:31.182532 2271 state_mem.go:36] "Initialized new in-memory state store" May 15 05:03:31.187131 kubelet[2271]: I0515 05:03:31.187088 2271 kubelet.go:408] "Attempting to sync node with API server" May 15 05:03:31.187178 kubelet[2271]: I0515 05:03:31.187135 2271 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 15 05:03:31.187205 kubelet[2271]: I0515 05:03:31.187182 2271 kubelet.go:314] "Adding apiserver pod source" May 15 05:03:31.187228 kubelet[2271]: I0515 05:03:31.187205 2271 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 15 05:03:31.188400 kubelet[2271]: W0515 05:03:31.188185 2271 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.24.4.5:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152-2-3-n-5005c4e40f.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.5:6443: connect: connection refused May 15 05:03:31.188400 kubelet[2271]: E0515 05:03:31.188245 2271 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://172.24.4.5:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152-2-3-n-5005c4e40f.novalocal&limit=500&resourceVersion=0\": dial tcp 172.24.4.5:6443: connect: connection refused" logger="UnhandledError" May 15 05:03:31.201859 kubelet[2271]: I0515 05:03:31.201725 2271 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" May 15 05:03:31.203304 kubelet[2271]: W0515 05:03:31.203101 2271 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.24.4.5:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.24.4.5:6443: connect: connection refused May 15 05:03:31.203304 kubelet[2271]: E0515 05:03:31.203211 2271 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.24.4.5:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.24.4.5:6443: connect: connection refused" logger="UnhandledError" May 15 05:03:31.204026 kubelet[2271]: I0515 05:03:31.203986 2271 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 15 05:03:31.204128 kubelet[2271]: W0515 05:03:31.204046 2271 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
May 15 05:03:31.204640 kubelet[2271]: I0515 05:03:31.204598 2271 server.go:1269] "Started kubelet" May 15 05:03:31.207516 kubelet[2271]: I0515 05:03:31.207458 2271 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 15 05:03:31.211346 kubelet[2271]: I0515 05:03:31.209592 2271 server.go:460] "Adding debug handlers to kubelet server" May 15 05:03:31.211346 kubelet[2271]: I0515 05:03:31.211143 2271 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 15 05:03:31.211526 kubelet[2271]: I0515 05:03:31.211467 2271 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 15 05:03:31.214316 kubelet[2271]: I0515 05:03:31.214282 2271 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 15 05:03:31.214843 kubelet[2271]: E0515 05:03:31.211625 2271 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.24.4.5:6443/api/v1/namespaces/default/events\": dial tcp 172.24.4.5:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4152-2-3-n-5005c4e40f.novalocal.183f9ad2f01240a4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4152-2-3-n-5005c4e40f.novalocal,UID:ci-4152-2-3-n-5005c4e40f.novalocal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4152-2-3-n-5005c4e40f.novalocal,},FirstTimestamp:2025-05-15 05:03:31.204579492 +0000 UTC m=+0.740867398,LastTimestamp:2025-05-15 05:03:31.204579492 +0000 UTC m=+0.740867398,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4152-2-3-n-5005c4e40f.novalocal,}" May 15 05:03:31.216031 kubelet[2271]: I0515 05:03:31.215991 2271 dynamic_serving_content.go:135] "Starting controller" 
name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 15 05:03:31.219648 kubelet[2271]: E0515 05:03:31.219603 2271 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 15 05:03:31.219770 kubelet[2271]: E0515 05:03:31.219701 2271 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4152-2-3-n-5005c4e40f.novalocal\" not found" May 15 05:03:31.219770 kubelet[2271]: I0515 05:03:31.219725 2271 volume_manager.go:289] "Starting Kubelet Volume Manager" May 15 05:03:31.219895 kubelet[2271]: I0515 05:03:31.219873 2271 desired_state_of_world_populator.go:146] "Desired state populator starts to run" May 15 05:03:31.219955 kubelet[2271]: I0515 05:03:31.219916 2271 reconciler.go:26] "Reconciler: start to sync state" May 15 05:03:31.220583 kubelet[2271]: W0515 05:03:31.220518 2271 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.24.4.5:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.5:6443: connect: connection refused May 15 05:03:31.220583 kubelet[2271]: E0515 05:03:31.220580 2271 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.24.4.5:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.24.4.5:6443: connect: connection refused" logger="UnhandledError" May 15 05:03:31.221492 kubelet[2271]: E0515 05:03:31.221189 2271 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.5:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152-2-3-n-5005c4e40f.novalocal?timeout=10s\": dial tcp 172.24.4.5:6443: connect: connection refused" interval="200ms" May 15 05:03:31.222466 kubelet[2271]: I0515 05:03:31.221815 2271 
factory.go:221] Registration of the containerd container factory successfully May 15 05:03:31.222466 kubelet[2271]: I0515 05:03:31.221830 2271 factory.go:221] Registration of the systemd container factory successfully May 15 05:03:31.222466 kubelet[2271]: I0515 05:03:31.221885 2271 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 15 05:03:31.266275 kubelet[2271]: I0515 05:03:31.266220 2271 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 15 05:03:31.267541 kubelet[2271]: I0515 05:03:31.267526 2271 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 15 05:03:31.267654 kubelet[2271]: I0515 05:03:31.267631 2271 status_manager.go:217] "Starting to sync pod status with apiserver" May 15 05:03:31.267894 kubelet[2271]: I0515 05:03:31.267882 2271 kubelet.go:2321] "Starting kubelet main sync loop" May 15 05:03:31.268068 kubelet[2271]: E0515 05:03:31.268049 2271 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 15 05:03:31.269740 kubelet[2271]: W0515 05:03:31.269720 2271 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.24.4.5:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.5:6443: connect: connection refused May 15 05:03:31.269851 kubelet[2271]: E0515 05:03:31.269832 2271 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.24.4.5:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.24.4.5:6443: connect: connection refused" logger="UnhandledError" May 15 05:03:31.272278 kubelet[2271]: I0515 05:03:31.272043 2271 
cpu_manager.go:214] "Starting CPU manager" policy="none" May 15 05:03:31.272278 kubelet[2271]: I0515 05:03:31.272057 2271 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 15 05:03:31.272278 kubelet[2271]: I0515 05:03:31.272071 2271 state_mem.go:36] "Initialized new in-memory state store" May 15 05:03:31.284243 kubelet[2271]: I0515 05:03:31.284223 2271 policy_none.go:49] "None policy: Start" May 15 05:03:31.285090 kubelet[2271]: I0515 05:03:31.285046 2271 memory_manager.go:170] "Starting memorymanager" policy="None" May 15 05:03:31.285090 kubelet[2271]: I0515 05:03:31.285089 2271 state_mem.go:35] "Initializing new in-memory state store" May 15 05:03:31.293291 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 15 05:03:31.302401 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 15 05:03:31.307550 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
May 15 05:03:31.319201 kubelet[2271]: I0515 05:03:31.318176 2271 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 15 05:03:31.319201 kubelet[2271]: I0515 05:03:31.318363 2271 eviction_manager.go:189] "Eviction manager: starting control loop" May 15 05:03:31.319201 kubelet[2271]: I0515 05:03:31.318374 2271 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 15 05:03:31.319201 kubelet[2271]: I0515 05:03:31.318688 2271 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 15 05:03:31.322274 kubelet[2271]: E0515 05:03:31.322236 2271 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4152-2-3-n-5005c4e40f.novalocal\" not found" May 15 05:03:31.391844 systemd[1]: Created slice kubepods-burstable-podb19a4a1e7820631f8d66c7b1fdc8a7c7.slice - libcontainer container kubepods-burstable-podb19a4a1e7820631f8d66c7b1fdc8a7c7.slice. May 15 05:03:31.413640 systemd[1]: Created slice kubepods-burstable-pod38e31986ae6a4cda0677e8759e2ad681.slice - libcontainer container kubepods-burstable-pod38e31986ae6a4cda0677e8759e2ad681.slice. 
May 15 05:03:31.423101 kubelet[2271]: E0515 05:03:31.421904 2271 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.5:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152-2-3-n-5005c4e40f.novalocal?timeout=10s\": dial tcp 172.24.4.5:6443: connect: connection refused" interval="400ms" May 15 05:03:31.423101 kubelet[2271]: I0515 05:03:31.422668 2271 kubelet_node_status.go:72] "Attempting to register node" node="ci-4152-2-3-n-5005c4e40f.novalocal" May 15 05:03:31.423101 kubelet[2271]: E0515 05:03:31.423060 2271 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.24.4.5:6443/api/v1/nodes\": dial tcp 172.24.4.5:6443: connect: connection refused" node="ci-4152-2-3-n-5005c4e40f.novalocal" May 15 05:03:31.431654 systemd[1]: Created slice kubepods-burstable-podd76605270b7eea9f4d9d21a7c835ba6b.slice - libcontainer container kubepods-burstable-podd76605270b7eea9f4d9d21a7c835ba6b.slice. May 15 05:03:31.521173 kubelet[2271]: I0515 05:03:31.521015 2271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/38e31986ae6a4cda0677e8759e2ad681-kubeconfig\") pod \"kube-controller-manager-ci-4152-2-3-n-5005c4e40f.novalocal\" (UID: \"38e31986ae6a4cda0677e8759e2ad681\") " pod="kube-system/kube-controller-manager-ci-4152-2-3-n-5005c4e40f.novalocal" May 15 05:03:31.521173 kubelet[2271]: I0515 05:03:31.521105 2271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d76605270b7eea9f4d9d21a7c835ba6b-kubeconfig\") pod \"kube-scheduler-ci-4152-2-3-n-5005c4e40f.novalocal\" (UID: \"d76605270b7eea9f4d9d21a7c835ba6b\") " pod="kube-system/kube-scheduler-ci-4152-2-3-n-5005c4e40f.novalocal" May 15 05:03:31.521173 kubelet[2271]: I0515 05:03:31.521160 2271 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b19a4a1e7820631f8d66c7b1fdc8a7c7-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4152-2-3-n-5005c4e40f.novalocal\" (UID: \"b19a4a1e7820631f8d66c7b1fdc8a7c7\") " pod="kube-system/kube-apiserver-ci-4152-2-3-n-5005c4e40f.novalocal" May 15 05:03:31.521951 kubelet[2271]: I0515 05:03:31.521209 2271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/38e31986ae6a4cda0677e8759e2ad681-ca-certs\") pod \"kube-controller-manager-ci-4152-2-3-n-5005c4e40f.novalocal\" (UID: \"38e31986ae6a4cda0677e8759e2ad681\") " pod="kube-system/kube-controller-manager-ci-4152-2-3-n-5005c4e40f.novalocal" May 15 05:03:31.521951 kubelet[2271]: I0515 05:03:31.521253 2271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/38e31986ae6a4cda0677e8759e2ad681-flexvolume-dir\") pod \"kube-controller-manager-ci-4152-2-3-n-5005c4e40f.novalocal\" (UID: \"38e31986ae6a4cda0677e8759e2ad681\") " pod="kube-system/kube-controller-manager-ci-4152-2-3-n-5005c4e40f.novalocal" May 15 05:03:31.521951 kubelet[2271]: I0515 05:03:31.521297 2271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/38e31986ae6a4cda0677e8759e2ad681-k8s-certs\") pod \"kube-controller-manager-ci-4152-2-3-n-5005c4e40f.novalocal\" (UID: \"38e31986ae6a4cda0677e8759e2ad681\") " pod="kube-system/kube-controller-manager-ci-4152-2-3-n-5005c4e40f.novalocal" May 15 05:03:31.521951 kubelet[2271]: I0515 05:03:31.521453 2271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/38e31986ae6a4cda0677e8759e2ad681-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4152-2-3-n-5005c4e40f.novalocal\" (UID: \"38e31986ae6a4cda0677e8759e2ad681\") " pod="kube-system/kube-controller-manager-ci-4152-2-3-n-5005c4e40f.novalocal" May 15 05:03:31.522198 kubelet[2271]: I0515 05:03:31.521500 2271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b19a4a1e7820631f8d66c7b1fdc8a7c7-ca-certs\") pod \"kube-apiserver-ci-4152-2-3-n-5005c4e40f.novalocal\" (UID: \"b19a4a1e7820631f8d66c7b1fdc8a7c7\") " pod="kube-system/kube-apiserver-ci-4152-2-3-n-5005c4e40f.novalocal" May 15 05:03:31.522198 kubelet[2271]: I0515 05:03:31.521543 2271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b19a4a1e7820631f8d66c7b1fdc8a7c7-k8s-certs\") pod \"kube-apiserver-ci-4152-2-3-n-5005c4e40f.novalocal\" (UID: \"b19a4a1e7820631f8d66c7b1fdc8a7c7\") " pod="kube-system/kube-apiserver-ci-4152-2-3-n-5005c4e40f.novalocal" May 15 05:03:31.626920 kubelet[2271]: I0515 05:03:31.626817 2271 kubelet_node_status.go:72] "Attempting to register node" node="ci-4152-2-3-n-5005c4e40f.novalocal" May 15 05:03:31.627762 kubelet[2271]: E0515 05:03:31.627678 2271 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.24.4.5:6443/api/v1/nodes\": dial tcp 172.24.4.5:6443: connect: connection refused" node="ci-4152-2-3-n-5005c4e40f.novalocal" May 15 05:03:31.704482 containerd[1475]: time="2025-05-15T05:03:31.703486261Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4152-2-3-n-5005c4e40f.novalocal,Uid:b19a4a1e7820631f8d66c7b1fdc8a7c7,Namespace:kube-system,Attempt:0,}" May 15 05:03:31.725844 containerd[1475]: time="2025-05-15T05:03:31.725687990Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-ci-4152-2-3-n-5005c4e40f.novalocal,Uid:38e31986ae6a4cda0677e8759e2ad681,Namespace:kube-system,Attempt:0,}" May 15 05:03:31.744663 containerd[1475]: time="2025-05-15T05:03:31.744426214Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4152-2-3-n-5005c4e40f.novalocal,Uid:d76605270b7eea9f4d9d21a7c835ba6b,Namespace:kube-system,Attempt:0,}" May 15 05:03:31.823501 kubelet[2271]: E0515 05:03:31.823427 2271 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.5:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152-2-3-n-5005c4e40f.novalocal?timeout=10s\": dial tcp 172.24.4.5:6443: connect: connection refused" interval="800ms" May 15 05:03:32.032033 kubelet[2271]: I0515 05:03:32.031806 2271 kubelet_node_status.go:72] "Attempting to register node" node="ci-4152-2-3-n-5005c4e40f.novalocal" May 15 05:03:32.032880 kubelet[2271]: E0515 05:03:32.032728 2271 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.24.4.5:6443/api/v1/nodes\": dial tcp 172.24.4.5:6443: connect: connection refused" node="ci-4152-2-3-n-5005c4e40f.novalocal" May 15 05:03:32.078381 kubelet[2271]: W0515 05:03:32.077359 2271 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.24.4.5:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.5:6443: connect: connection refused May 15 05:03:32.078381 kubelet[2271]: E0515 05:03:32.077447 2271 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.24.4.5:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.24.4.5:6443: connect: connection refused" logger="UnhandledError" May 15 05:03:32.261487 kubelet[2271]: W0515 05:03:32.261220 2271 reflector.go:561] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.24.4.5:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152-2-3-n-5005c4e40f.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.5:6443: connect: connection refused May 15 05:03:32.261715 kubelet[2271]: E0515 05:03:32.261553 2271 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.24.4.5:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152-2-3-n-5005c4e40f.novalocal&limit=500&resourceVersion=0\": dial tcp 172.24.4.5:6443: connect: connection refused" logger="UnhandledError" May 15 05:03:32.399719 kubelet[2271]: W0515 05:03:32.399418 2271 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.24.4.5:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.5:6443: connect: connection refused May 15 05:03:32.399719 kubelet[2271]: E0515 05:03:32.399584 2271 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.24.4.5:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.24.4.5:6443: connect: connection refused" logger="UnhandledError" May 15 05:03:32.597201 kubelet[2271]: W0515 05:03:32.596967 2271 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.24.4.5:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.24.4.5:6443: connect: connection refused May 15 05:03:32.597201 kubelet[2271]: E0515 05:03:32.597138 2271 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get 
\"https://172.24.4.5:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.24.4.5:6443: connect: connection refused" logger="UnhandledError" May 15 05:03:32.624285 kubelet[2271]: E0515 05:03:32.624209 2271 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.5:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152-2-3-n-5005c4e40f.novalocal?timeout=10s\": dial tcp 172.24.4.5:6443: connect: connection refused" interval="1.6s" May 15 05:03:32.837617 kubelet[2271]: I0515 05:03:32.837537 2271 kubelet_node_status.go:72] "Attempting to register node" node="ci-4152-2-3-n-5005c4e40f.novalocal" May 15 05:03:32.838080 kubelet[2271]: E0515 05:03:32.838010 2271 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.24.4.5:6443/api/v1/nodes\": dial tcp 172.24.4.5:6443: connect: connection refused" node="ci-4152-2-3-n-5005c4e40f.novalocal" May 15 05:03:32.932690 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4023207553.mount: Deactivated successfully. 
May 15 05:03:32.953757 containerd[1475]: time="2025-05-15T05:03:32.953666684Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 15 05:03:32.959468 containerd[1475]: time="2025-05-15T05:03:32.959377605Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" May 15 05:03:32.960584 containerd[1475]: time="2025-05-15T05:03:32.960499219Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 15 05:03:32.962796 containerd[1475]: time="2025-05-15T05:03:32.962678706Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 15 05:03:32.966841 containerd[1475]: time="2025-05-15T05:03:32.966701641Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 15 05:03:32.968672 containerd[1475]: time="2025-05-15T05:03:32.968424182Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 15 05:03:32.970103 containerd[1475]: time="2025-05-15T05:03:32.970040423Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 15 05:03:32.972380 containerd[1475]: time="2025-05-15T05:03:32.972156362Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 15 05:03:32.982279 
containerd[1475]: time="2025-05-15T05:03:32.981969346Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.278282058s" May 15 05:03:32.989476 containerd[1475]: time="2025-05-15T05:03:32.989162126Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.244368973s" May 15 05:03:32.990921 containerd[1475]: time="2025-05-15T05:03:32.990855011Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.264892336s" May 15 05:03:33.189420 kubelet[2271]: E0515 05:03:33.188983 2271 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.24.4.5:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.24.4.5:6443: connect: connection refused" logger="UnhandledError" May 15 05:03:33.219152 containerd[1475]: time="2025-05-15T05:03:33.218912763Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 05:03:33.219152 containerd[1475]: time="2025-05-15T05:03:33.218972685Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 05:03:33.219152 containerd[1475]: time="2025-05-15T05:03:33.218991751Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 05:03:33.219152 containerd[1475]: time="2025-05-15T05:03:33.219066031Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 05:03:33.229160 containerd[1475]: time="2025-05-15T05:03:33.228540520Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 05:03:33.229160 containerd[1475]: time="2025-05-15T05:03:33.228675203Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 05:03:33.229160 containerd[1475]: time="2025-05-15T05:03:33.228755393Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 05:03:33.229160 containerd[1475]: time="2025-05-15T05:03:33.229009079Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 05:03:33.233221 containerd[1475]: time="2025-05-15T05:03:33.233062791Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 05:03:33.233852 containerd[1475]: time="2025-05-15T05:03:33.233176685Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 05:03:33.236413 containerd[1475]: time="2025-05-15T05:03:33.236333506Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 05:03:33.236884 containerd[1475]: time="2025-05-15T05:03:33.236550412Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 05:03:33.258524 systemd[1]: Started cri-containerd-bb2a87ef51277440570c5a06677462dcdcb79a1eac002379a8da5e58dcc6319f.scope - libcontainer container bb2a87ef51277440570c5a06677462dcdcb79a1eac002379a8da5e58dcc6319f. May 15 05:03:33.266474 systemd[1]: Started cri-containerd-fa396affd2772429328168a9257a6dde127f103a80845fec5a9cd18fc7c6e64e.scope - libcontainer container fa396affd2772429328168a9257a6dde127f103a80845fec5a9cd18fc7c6e64e. May 15 05:03:33.275528 systemd[1]: Started cri-containerd-176f246cc585a17017447d976a87a1e662ac79093ce14b16d4e024acda3df4d2.scope - libcontainer container 176f246cc585a17017447d976a87a1e662ac79093ce14b16d4e024acda3df4d2. 
May 15 05:03:33.325351 containerd[1475]: time="2025-05-15T05:03:33.323458008Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4152-2-3-n-5005c4e40f.novalocal,Uid:b19a4a1e7820631f8d66c7b1fdc8a7c7,Namespace:kube-system,Attempt:0,} returns sandbox id \"bb2a87ef51277440570c5a06677462dcdcb79a1eac002379a8da5e58dcc6319f\"" May 15 05:03:33.329856 containerd[1475]: time="2025-05-15T05:03:33.329498697Z" level=info msg="CreateContainer within sandbox \"bb2a87ef51277440570c5a06677462dcdcb79a1eac002379a8da5e58dcc6319f\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 15 05:03:33.349146 containerd[1475]: time="2025-05-15T05:03:33.349033695Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4152-2-3-n-5005c4e40f.novalocal,Uid:38e31986ae6a4cda0677e8759e2ad681,Namespace:kube-system,Attempt:0,} returns sandbox id \"fa396affd2772429328168a9257a6dde127f103a80845fec5a9cd18fc7c6e64e\"" May 15 05:03:33.352784 containerd[1475]: time="2025-05-15T05:03:33.352761005Z" level=info msg="CreateContainer within sandbox \"fa396affd2772429328168a9257a6dde127f103a80845fec5a9cd18fc7c6e64e\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 15 05:03:33.355133 containerd[1475]: time="2025-05-15T05:03:33.355071719Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4152-2-3-n-5005c4e40f.novalocal,Uid:d76605270b7eea9f4d9d21a7c835ba6b,Namespace:kube-system,Attempt:0,} returns sandbox id \"176f246cc585a17017447d976a87a1e662ac79093ce14b16d4e024acda3df4d2\"" May 15 05:03:33.358338 containerd[1475]: time="2025-05-15T05:03:33.358296537Z" level=info msg="CreateContainer within sandbox \"176f246cc585a17017447d976a87a1e662ac79093ce14b16d4e024acda3df4d2\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 15 05:03:33.388853 containerd[1475]: time="2025-05-15T05:03:33.388506336Z" level=info msg="CreateContainer within sandbox 
\"bb2a87ef51277440570c5a06677462dcdcb79a1eac002379a8da5e58dcc6319f\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"c9949430c3e36a260523b7eb1cc6b2b8402f30ef47ea2312220188ee1e09693d\"" May 15 05:03:33.389627 containerd[1475]: time="2025-05-15T05:03:33.389365758Z" level=info msg="StartContainer for \"c9949430c3e36a260523b7eb1cc6b2b8402f30ef47ea2312220188ee1e09693d\"" May 15 05:03:33.397035 containerd[1475]: time="2025-05-15T05:03:33.396973105Z" level=info msg="CreateContainer within sandbox \"fa396affd2772429328168a9257a6dde127f103a80845fec5a9cd18fc7c6e64e\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"7eb7ac7039aacfc85bb0fd5a9ffb980c707c6bcbaac5ea9ea53a4fb2460784e4\"" May 15 05:03:33.400272 containerd[1475]: time="2025-05-15T05:03:33.400200628Z" level=info msg="StartContainer for \"7eb7ac7039aacfc85bb0fd5a9ffb980c707c6bcbaac5ea9ea53a4fb2460784e4\"" May 15 05:03:33.413889 containerd[1475]: time="2025-05-15T05:03:33.413834769Z" level=info msg="CreateContainer within sandbox \"176f246cc585a17017447d976a87a1e662ac79093ce14b16d4e024acda3df4d2\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"2957f80b0f6826f3059fda818d003d0e4887d57a1e38d05f1bcec9e56440e506\"" May 15 05:03:33.414403 containerd[1475]: time="2025-05-15T05:03:33.414373480Z" level=info msg="StartContainer for \"2957f80b0f6826f3059fda818d003d0e4887d57a1e38d05f1bcec9e56440e506\"" May 15 05:03:33.424605 systemd[1]: Started cri-containerd-c9949430c3e36a260523b7eb1cc6b2b8402f30ef47ea2312220188ee1e09693d.scope - libcontainer container c9949430c3e36a260523b7eb1cc6b2b8402f30ef47ea2312220188ee1e09693d. May 15 05:03:33.452573 systemd[1]: Started cri-containerd-7eb7ac7039aacfc85bb0fd5a9ffb980c707c6bcbaac5ea9ea53a4fb2460784e4.scope - libcontainer container 7eb7ac7039aacfc85bb0fd5a9ffb980c707c6bcbaac5ea9ea53a4fb2460784e4. 
May 15 05:03:33.466855 systemd[1]: Started cri-containerd-2957f80b0f6826f3059fda818d003d0e4887d57a1e38d05f1bcec9e56440e506.scope - libcontainer container 2957f80b0f6826f3059fda818d003d0e4887d57a1e38d05f1bcec9e56440e506. May 15 05:03:33.500350 containerd[1475]: time="2025-05-15T05:03:33.500275409Z" level=info msg="StartContainer for \"c9949430c3e36a260523b7eb1cc6b2b8402f30ef47ea2312220188ee1e09693d\" returns successfully" May 15 05:03:33.536407 containerd[1475]: time="2025-05-15T05:03:33.536364464Z" level=info msg="StartContainer for \"7eb7ac7039aacfc85bb0fd5a9ffb980c707c6bcbaac5ea9ea53a4fb2460784e4\" returns successfully" May 15 05:03:33.577000 containerd[1475]: time="2025-05-15T05:03:33.576895229Z" level=info msg="StartContainer for \"2957f80b0f6826f3059fda818d003d0e4887d57a1e38d05f1bcec9e56440e506\" returns successfully" May 15 05:03:34.441026 kubelet[2271]: I0515 05:03:34.440629 2271 kubelet_node_status.go:72] "Attempting to register node" node="ci-4152-2-3-n-5005c4e40f.novalocal" May 15 05:03:35.489174 kubelet[2271]: I0515 05:03:35.489142 2271 kubelet_node_status.go:75] "Successfully registered node" node="ci-4152-2-3-n-5005c4e40f.novalocal" May 15 05:03:35.491610 kubelet[2271]: E0515 05:03:35.489683 2271 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ci-4152-2-3-n-5005c4e40f.novalocal\": node \"ci-4152-2-3-n-5005c4e40f.novalocal\" not found" May 15 05:03:36.192522 kubelet[2271]: I0515 05:03:36.192453 2271 apiserver.go:52] "Watching apiserver" May 15 05:03:36.220660 kubelet[2271]: I0515 05:03:36.220589 2271 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" May 15 05:03:36.301431 kubelet[2271]: E0515 05:03:36.301270 2271 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4152-2-3-n-5005c4e40f.novalocal\" is forbidden: no PriorityClass with name system-node-critical was found" 
pod="kube-system/kube-apiserver-ci-4152-2-3-n-5005c4e40f.novalocal" May 15 05:03:37.716166 systemd[1]: Reloading requested from client PID 2545 ('systemctl') (unit session-11.scope)... May 15 05:03:37.716202 systemd[1]: Reloading... May 15 05:03:37.836388 zram_generator::config[2587]: No configuration found. May 15 05:03:37.968034 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 15 05:03:38.067468 systemd[1]: Reloading finished in 350 ms. May 15 05:03:38.112812 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 15 05:03:38.130214 systemd[1]: kubelet.service: Deactivated successfully. May 15 05:03:38.130765 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 15 05:03:38.130867 systemd[1]: kubelet.service: Consumed 1.316s CPU time, 116.5M memory peak, 0B memory swap peak. May 15 05:03:38.139071 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 05:03:38.344478 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 15 05:03:38.346267 (kubelet)[2648]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 15 05:03:38.415401 kubelet[2648]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 15 05:03:38.415401 kubelet[2648]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
May 15 05:03:38.415401 kubelet[2648]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 15 05:03:38.416057 kubelet[2648]: I0515 05:03:38.415507 2648 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 15 05:03:38.429628 kubelet[2648]: I0515 05:03:38.429585 2648 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" May 15 05:03:38.429628 kubelet[2648]: I0515 05:03:38.429613 2648 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 15 05:03:38.429882 kubelet[2648]: I0515 05:03:38.429857 2648 server.go:929] "Client rotation is on, will bootstrap in background" May 15 05:03:38.431596 kubelet[2648]: I0515 05:03:38.431566 2648 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 15 05:03:38.433577 kubelet[2648]: I0515 05:03:38.433450 2648 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 15 05:03:38.436691 kubelet[2648]: E0515 05:03:38.436504 2648 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 15 05:03:38.436691 kubelet[2648]: I0515 05:03:38.436532 2648 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 15 05:03:38.438891 kubelet[2648]: I0515 05:03:38.438875 2648 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 15 05:03:38.438974 kubelet[2648]: I0515 05:03:38.438960 2648 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 15 05:03:38.439101 kubelet[2648]: I0515 05:03:38.439067 2648 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 15 05:03:38.439255 kubelet[2648]: I0515 05:03:38.439095 2648 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4152-2-3-n-5005c4e40f.novalocal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none
","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 15 05:03:38.439355 kubelet[2648]: I0515 05:03:38.439255 2648 topology_manager.go:138] "Creating topology manager with none policy" May 15 05:03:38.439355 kubelet[2648]: I0515 05:03:38.439265 2648 container_manager_linux.go:300] "Creating device plugin manager" May 15 05:03:38.439355 kubelet[2648]: I0515 05:03:38.439289 2648 state_mem.go:36] "Initialized new in-memory state store" May 15 05:03:38.439439 kubelet[2648]: I0515 05:03:38.439386 2648 kubelet.go:408] "Attempting to sync node with API server" May 15 05:03:38.439439 kubelet[2648]: I0515 05:03:38.439398 2648 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 15 05:03:38.439439 kubelet[2648]: I0515 05:03:38.439439 2648 kubelet.go:314] "Adding apiserver pod source" May 15 05:03:38.440440 kubelet[2648]: I0515 05:03:38.439453 2648 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 15 05:03:38.441862 kubelet[2648]: I0515 05:03:38.441839 2648 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" May 15 05:03:38.444329 kubelet[2648]: I0515 05:03:38.442626 2648 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 15 05:03:38.452107 kubelet[2648]: I0515 05:03:38.452084 2648 server.go:1269] "Started kubelet" May 15 05:03:38.454249 kubelet[2648]: I0515 05:03:38.454233 2648 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 15 05:03:38.455521 kubelet[2648]: I0515 05:03:38.455447 2648 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 15 05:03:38.456076 kubelet[2648]: I0515 05:03:38.456053 2648 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 15 05:03:38.456935 kubelet[2648]: I0515 05:03:38.456272 2648 dynamic_serving_content.go:135] "Starting controller" 
name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 15 05:03:38.459351 kubelet[2648]: I0515 05:03:38.457879 2648 volume_manager.go:289] "Starting Kubelet Volume Manager" May 15 05:03:38.459351 kubelet[2648]: E0515 05:03:38.458750 2648 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4152-2-3-n-5005c4e40f.novalocal\" not found" May 15 05:03:38.459496 kubelet[2648]: I0515 05:03:38.459473 2648 desired_state_of_world_populator.go:146] "Desired state populator starts to run" May 15 05:03:38.459644 kubelet[2648]: I0515 05:03:38.459615 2648 reconciler.go:26] "Reconciler: start to sync state" May 15 05:03:38.461228 kubelet[2648]: I0515 05:03:38.460079 2648 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 15 05:03:38.469346 kubelet[2648]: I0515 05:03:38.467024 2648 server.go:460] "Adding debug handlers to kubelet server" May 15 05:03:38.470297 kubelet[2648]: I0515 05:03:38.470261 2648 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 15 05:03:38.472499 kubelet[2648]: I0515 05:03:38.471122 2648 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 15 05:03:38.472499 kubelet[2648]: I0515 05:03:38.471145 2648 status_manager.go:217] "Starting to sync pod status with apiserver" May 15 05:03:38.472499 kubelet[2648]: I0515 05:03:38.471161 2648 kubelet.go:2321] "Starting kubelet main sync loop" May 15 05:03:38.472499 kubelet[2648]: E0515 05:03:38.471194 2648 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 15 05:03:38.480049 kubelet[2648]: I0515 05:03:38.480011 2648 factory.go:221] Registration of the containerd container factory successfully May 15 05:03:38.480472 kubelet[2648]: I0515 05:03:38.480460 2648 factory.go:221] Registration of the systemd container factory successfully May 15 05:03:38.482623 kubelet[2648]: I0515 05:03:38.480595 2648 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 15 05:03:38.544192 kubelet[2648]: I0515 05:03:38.544171 2648 cpu_manager.go:214] "Starting CPU manager" policy="none" May 15 05:03:38.544760 kubelet[2648]: I0515 05:03:38.544368 2648 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 15 05:03:38.544760 kubelet[2648]: I0515 05:03:38.544389 2648 state_mem.go:36] "Initialized new in-memory state store" May 15 05:03:38.544760 kubelet[2648]: I0515 05:03:38.544533 2648 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 15 05:03:38.544760 kubelet[2648]: I0515 05:03:38.544546 2648 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 15 05:03:38.544760 kubelet[2648]: I0515 05:03:38.544563 2648 policy_none.go:49] "None policy: Start" May 15 05:03:38.545197 kubelet[2648]: I0515 05:03:38.545185 2648 memory_manager.go:170] "Starting memorymanager" policy="None" May 15 05:03:38.545352 kubelet[2648]: I0515 05:03:38.545285 2648 state_mem.go:35] "Initializing new in-memory state store" May 
15 05:03:38.545539 kubelet[2648]: I0515 05:03:38.545527 2648 state_mem.go:75] "Updated machine memory state" May 15 05:03:38.551174 kubelet[2648]: I0515 05:03:38.550828 2648 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 15 05:03:38.551949 kubelet[2648]: I0515 05:03:38.551497 2648 eviction_manager.go:189] "Eviction manager: starting control loop" May 15 05:03:38.551949 kubelet[2648]: I0515 05:03:38.551531 2648 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 15 05:03:38.551949 kubelet[2648]: I0515 05:03:38.551881 2648 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 15 05:03:38.580455 kubelet[2648]: W0515 05:03:38.580418 2648 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 15 05:03:38.581534 kubelet[2648]: W0515 05:03:38.581505 2648 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 15 05:03:38.582757 kubelet[2648]: W0515 05:03:38.582735 2648 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 15 05:03:38.657452 kubelet[2648]: I0515 05:03:38.657216 2648 kubelet_node_status.go:72] "Attempting to register node" node="ci-4152-2-3-n-5005c4e40f.novalocal" May 15 05:03:38.676430 kubelet[2648]: I0515 05:03:38.676362 2648 kubelet_node_status.go:111] "Node was previously registered" node="ci-4152-2-3-n-5005c4e40f.novalocal" May 15 05:03:38.678460 kubelet[2648]: I0515 05:03:38.676517 2648 kubelet_node_status.go:75] "Successfully registered node" node="ci-4152-2-3-n-5005c4e40f.novalocal" May 15 05:03:38.718784 sudo[2680]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf 
/opt/bin/cilium.tar.gz -C /opt/bin May 15 05:03:38.719504 sudo[2680]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) May 15 05:03:38.761144 kubelet[2648]: I0515 05:03:38.761058 2648 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/38e31986ae6a4cda0677e8759e2ad681-k8s-certs\") pod \"kube-controller-manager-ci-4152-2-3-n-5005c4e40f.novalocal\" (UID: \"38e31986ae6a4cda0677e8759e2ad681\") " pod="kube-system/kube-controller-manager-ci-4152-2-3-n-5005c4e40f.novalocal" May 15 05:03:38.762216 kubelet[2648]: I0515 05:03:38.761790 2648 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/38e31986ae6a4cda0677e8759e2ad681-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4152-2-3-n-5005c4e40f.novalocal\" (UID: \"38e31986ae6a4cda0677e8759e2ad681\") " pod="kube-system/kube-controller-manager-ci-4152-2-3-n-5005c4e40f.novalocal" May 15 05:03:38.762216 kubelet[2648]: I0515 05:03:38.761926 2648 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b19a4a1e7820631f8d66c7b1fdc8a7c7-k8s-certs\") pod \"kube-apiserver-ci-4152-2-3-n-5005c4e40f.novalocal\" (UID: \"b19a4a1e7820631f8d66c7b1fdc8a7c7\") " pod="kube-system/kube-apiserver-ci-4152-2-3-n-5005c4e40f.novalocal" May 15 05:03:38.762216 kubelet[2648]: I0515 05:03:38.762120 2648 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b19a4a1e7820631f8d66c7b1fdc8a7c7-ca-certs\") pod \"kube-apiserver-ci-4152-2-3-n-5005c4e40f.novalocal\" (UID: \"b19a4a1e7820631f8d66c7b1fdc8a7c7\") " pod="kube-system/kube-apiserver-ci-4152-2-3-n-5005c4e40f.novalocal" May 15 05:03:38.762963 kubelet[2648]: I0515 05:03:38.762600 2648 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b19a4a1e7820631f8d66c7b1fdc8a7c7-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4152-2-3-n-5005c4e40f.novalocal\" (UID: \"b19a4a1e7820631f8d66c7b1fdc8a7c7\") " pod="kube-system/kube-apiserver-ci-4152-2-3-n-5005c4e40f.novalocal"
May 15 05:03:38.763651 kubelet[2648]: I0515 05:03:38.763098 2648 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/38e31986ae6a4cda0677e8759e2ad681-ca-certs\") pod \"kube-controller-manager-ci-4152-2-3-n-5005c4e40f.novalocal\" (UID: \"38e31986ae6a4cda0677e8759e2ad681\") " pod="kube-system/kube-controller-manager-ci-4152-2-3-n-5005c4e40f.novalocal"
May 15 05:03:38.763919 kubelet[2648]: I0515 05:03:38.763567 2648 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/38e31986ae6a4cda0677e8759e2ad681-flexvolume-dir\") pod \"kube-controller-manager-ci-4152-2-3-n-5005c4e40f.novalocal\" (UID: \"38e31986ae6a4cda0677e8759e2ad681\") " pod="kube-system/kube-controller-manager-ci-4152-2-3-n-5005c4e40f.novalocal"
May 15 05:03:38.764260 kubelet[2648]: I0515 05:03:38.764053 2648 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/38e31986ae6a4cda0677e8759e2ad681-kubeconfig\") pod \"kube-controller-manager-ci-4152-2-3-n-5005c4e40f.novalocal\" (UID: \"38e31986ae6a4cda0677e8759e2ad681\") " pod="kube-system/kube-controller-manager-ci-4152-2-3-n-5005c4e40f.novalocal"
May 15 05:03:38.764526 kubelet[2648]: I0515 05:03:38.764401 2648 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d76605270b7eea9f4d9d21a7c835ba6b-kubeconfig\") pod \"kube-scheduler-ci-4152-2-3-n-5005c4e40f.novalocal\" (UID: \"d76605270b7eea9f4d9d21a7c835ba6b\") " pod="kube-system/kube-scheduler-ci-4152-2-3-n-5005c4e40f.novalocal"
May 15 05:03:39.333854 sudo[2680]: pam_unix(sudo:session): session closed for user root
May 15 05:03:39.440812 kubelet[2648]: I0515 05:03:39.440759 2648 apiserver.go:52] "Watching apiserver"
May 15 05:03:39.459756 kubelet[2648]: I0515 05:03:39.459672 2648 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
May 15 05:03:39.678666 kubelet[2648]: I0515 05:03:39.678528 2648 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4152-2-3-n-5005c4e40f.novalocal" podStartSLOduration=1.678499406 podStartE2EDuration="1.678499406s" podCreationTimestamp="2025-05-15 05:03:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 05:03:39.67704072 +0000 UTC m=+1.327626585" watchObservedRunningTime="2025-05-15 05:03:39.678499406 +0000 UTC m=+1.329085280"
May 15 05:03:39.779468 kubelet[2648]: I0515 05:03:39.779280 2648 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4152-2-3-n-5005c4e40f.novalocal" podStartSLOduration=1.779253861 podStartE2EDuration="1.779253861s" podCreationTimestamp="2025-05-15 05:03:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 05:03:39.720216416 +0000 UTC m=+1.370802281" watchObservedRunningTime="2025-05-15 05:03:39.779253861 +0000 UTC m=+1.429839725"
May 15 05:03:39.779663 kubelet[2648]: I0515 05:03:39.779595 2648 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4152-2-3-n-5005c4e40f.novalocal" podStartSLOduration=1.7795835389999999 podStartE2EDuration="1.779583539s" podCreationTimestamp="2025-05-15 05:03:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 05:03:39.776500748 +0000 UTC m=+1.427086622" watchObservedRunningTime="2025-05-15 05:03:39.779583539 +0000 UTC m=+1.430169404"
May 15 05:03:42.742830 sudo[1717]: pam_unix(sudo:session): session closed for user root
May 15 05:03:42.939812 sshd[1716]: Connection closed by 172.24.4.1 port 47486
May 15 05:03:42.940197 sshd-session[1714]: pam_unix(sshd:session): session closed for user core
May 15 05:03:42.952431 systemd[1]: sshd@8-172.24.4.5:22-172.24.4.1:47486.service: Deactivated successfully.
May 15 05:03:42.958902 systemd[1]: session-11.scope: Deactivated successfully.
May 15 05:03:42.960787 systemd[1]: session-11.scope: Consumed 7.178s CPU time, 148.5M memory peak, 0B memory swap peak.
May 15 05:03:42.963456 systemd-logind[1440]: Session 11 logged out. Waiting for processes to exit.
May 15 05:03:42.968218 systemd-logind[1440]: Removed session 11.
May 15 05:03:44.535346 kubelet[2648]: I0515 05:03:44.534949 2648 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
May 15 05:03:44.535928 containerd[1475]: time="2025-05-15T05:03:44.535884661Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
May 15 05:03:44.537532 kubelet[2648]: I0515 05:03:44.537470 2648 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
May 15 05:03:45.410368 kubelet[2648]: I0515 05:03:45.408252 2648 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d1e674bb-eee9-4849-b25a-0a1096795f29-xtables-lock\") pod \"kube-proxy-ghj8p\" (UID: \"d1e674bb-eee9-4849-b25a-0a1096795f29\") " pod="kube-system/kube-proxy-ghj8p"
May 15 05:03:45.410368 kubelet[2648]: I0515 05:03:45.408367 2648 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d1e674bb-eee9-4849-b25a-0a1096795f29-kube-proxy\") pod \"kube-proxy-ghj8p\" (UID: \"d1e674bb-eee9-4849-b25a-0a1096795f29\") " pod="kube-system/kube-proxy-ghj8p"
May 15 05:03:45.410368 kubelet[2648]: I0515 05:03:45.408415 2648 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d1e674bb-eee9-4849-b25a-0a1096795f29-lib-modules\") pod \"kube-proxy-ghj8p\" (UID: \"d1e674bb-eee9-4849-b25a-0a1096795f29\") " pod="kube-system/kube-proxy-ghj8p"
May 15 05:03:45.410368 kubelet[2648]: I0515 05:03:45.408464 2648 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4znn\" (UniqueName: \"kubernetes.io/projected/d1e674bb-eee9-4849-b25a-0a1096795f29-kube-api-access-t4znn\") pod \"kube-proxy-ghj8p\" (UID: \"d1e674bb-eee9-4849-b25a-0a1096795f29\") " pod="kube-system/kube-proxy-ghj8p"
May 15 05:03:45.420104 systemd[1]: Created slice kubepods-besteffort-podd1e674bb_eee9_4849_b25a_0a1096795f29.slice - libcontainer container kubepods-besteffort-podd1e674bb_eee9_4849_b25a_0a1096795f29.slice.
May 15 05:03:45.438365 kubelet[2648]: W0515 05:03:45.437094 2648 reflector.go:561] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ci-4152-2-3-n-5005c4e40f.novalocal" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4152-2-3-n-5005c4e40f.novalocal' and this object
May 15 05:03:45.438365 kubelet[2648]: E0515 05:03:45.437137 2648 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"cilium-clustermesh\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-clustermesh\" is forbidden: User \"system:node:ci-4152-2-3-n-5005c4e40f.novalocal\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4152-2-3-n-5005c4e40f.novalocal' and this object" logger="UnhandledError"
May 15 05:03:45.438365 kubelet[2648]: W0515 05:03:45.437201 2648 reflector.go:561] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ci-4152-2-3-n-5005c4e40f.novalocal" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4152-2-3-n-5005c4e40f.novalocal' and this object
May 15 05:03:45.438365 kubelet[2648]: E0515 05:03:45.437218 2648 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"cilium-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"cilium-config\" is forbidden: User \"system:node:ci-4152-2-3-n-5005c4e40f.novalocal\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4152-2-3-n-5005c4e40f.novalocal' and this object" logger="UnhandledError"
May 15 05:03:45.438642 kubelet[2648]: W0515 05:03:45.437310 2648 reflector.go:561] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ci-4152-2-3-n-5005c4e40f.novalocal" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4152-2-3-n-5005c4e40f.novalocal' and this object
May 15 05:03:45.438869 kubelet[2648]: E0515 05:03:45.438812 2648 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"hubble-server-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"hubble-server-certs\" is forbidden: User \"system:node:ci-4152-2-3-n-5005c4e40f.novalocal\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4152-2-3-n-5005c4e40f.novalocal' and this object" logger="UnhandledError"
May 15 05:03:45.442591 systemd[1]: Created slice kubepods-burstable-pod0fedb74b_4281_479e_9e30_febbe6c42751.slice - libcontainer container kubepods-burstable-pod0fedb74b_4281_479e_9e30_febbe6c42751.slice.
May 15 05:03:45.508982 kubelet[2648]: I0515 05:03:45.508938 2648 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0fedb74b-4281-479e-9e30-febbe6c42751-host-proc-sys-net\") pod \"cilium-fk5mn\" (UID: \"0fedb74b-4281-479e-9e30-febbe6c42751\") " pod="kube-system/cilium-fk5mn"
May 15 05:03:45.509120 kubelet[2648]: I0515 05:03:45.509000 2648 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0fedb74b-4281-479e-9e30-febbe6c42751-xtables-lock\") pod \"cilium-fk5mn\" (UID: \"0fedb74b-4281-479e-9e30-febbe6c42751\") " pod="kube-system/cilium-fk5mn"
May 15 05:03:45.509120 kubelet[2648]: I0515 05:03:45.509020 2648 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0fedb74b-4281-479e-9e30-febbe6c42751-host-proc-sys-kernel\") pod \"cilium-fk5mn\" (UID: \"0fedb74b-4281-479e-9e30-febbe6c42751\") " pod="kube-system/cilium-fk5mn"
May 15 05:03:45.509293 kubelet[2648]: I0515 05:03:45.509272 2648 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0fedb74b-4281-479e-9e30-febbe6c42751-cilium-run\") pod \"cilium-fk5mn\" (UID: \"0fedb74b-4281-479e-9e30-febbe6c42751\") " pod="kube-system/cilium-fk5mn"
May 15 05:03:45.509293 kubelet[2648]: I0515 05:03:45.509299 2648 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0fedb74b-4281-479e-9e30-febbe6c42751-cilium-cgroup\") pod \"cilium-fk5mn\" (UID: \"0fedb74b-4281-479e-9e30-febbe6c42751\") " pod="kube-system/cilium-fk5mn"
May 15 05:03:45.509384 kubelet[2648]: I0515 05:03:45.509328 2648 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0fedb74b-4281-479e-9e30-febbe6c42751-etc-cni-netd\") pod \"cilium-fk5mn\" (UID: \"0fedb74b-4281-479e-9e30-febbe6c42751\") " pod="kube-system/cilium-fk5mn"
May 15 05:03:45.509384 kubelet[2648]: I0515 05:03:45.509348 2648 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjrjs\" (UniqueName: \"kubernetes.io/projected/0fedb74b-4281-479e-9e30-febbe6c42751-kube-api-access-gjrjs\") pod \"cilium-fk5mn\" (UID: \"0fedb74b-4281-479e-9e30-febbe6c42751\") " pod="kube-system/cilium-fk5mn"
May 15 05:03:45.509384 kubelet[2648]: I0515 05:03:45.509366 2648 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0fedb74b-4281-479e-9e30-febbe6c42751-cilium-config-path\") pod \"cilium-fk5mn\" (UID: \"0fedb74b-4281-479e-9e30-febbe6c42751\") " pod="kube-system/cilium-fk5mn"
May 15 05:03:45.509384 kubelet[2648]: I0515 05:03:45.509382 2648 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0fedb74b-4281-479e-9e30-febbe6c42751-hostproc\") pod \"cilium-fk5mn\" (UID: \"0fedb74b-4281-479e-9e30-febbe6c42751\") " pod="kube-system/cilium-fk5mn"
May 15 05:03:45.509498 kubelet[2648]: I0515 05:03:45.509399 2648 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0fedb74b-4281-479e-9e30-febbe6c42751-clustermesh-secrets\") pod \"cilium-fk5mn\" (UID: \"0fedb74b-4281-479e-9e30-febbe6c42751\") " pod="kube-system/cilium-fk5mn"
May 15 05:03:45.509498 kubelet[2648]: I0515 05:03:45.509416 2648 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0fedb74b-4281-479e-9e30-febbe6c42751-hubble-tls\") pod \"cilium-fk5mn\" (UID: \"0fedb74b-4281-479e-9e30-febbe6c42751\") " pod="kube-system/cilium-fk5mn"
May 15 05:03:45.509498 kubelet[2648]: I0515 05:03:45.509444 2648 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0fedb74b-4281-479e-9e30-febbe6c42751-bpf-maps\") pod \"cilium-fk5mn\" (UID: \"0fedb74b-4281-479e-9e30-febbe6c42751\") " pod="kube-system/cilium-fk5mn"
May 15 05:03:45.509498 kubelet[2648]: I0515 05:03:45.509461 2648 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0fedb74b-4281-479e-9e30-febbe6c42751-cni-path\") pod \"cilium-fk5mn\" (UID: \"0fedb74b-4281-479e-9e30-febbe6c42751\") " pod="kube-system/cilium-fk5mn"
May 15 05:03:45.509498 kubelet[2648]: I0515 05:03:45.509477 2648 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0fedb74b-4281-479e-9e30-febbe6c42751-lib-modules\") pod \"cilium-fk5mn\" (UID: \"0fedb74b-4281-479e-9e30-febbe6c42751\") " pod="kube-system/cilium-fk5mn"
May 15 05:03:45.619843 systemd[1]: Created slice kubepods-besteffort-pod81f32847_03ba_451a_a36c_94772ef4219e.slice - libcontainer container kubepods-besteffort-pod81f32847_03ba_451a_a36c_94772ef4219e.slice.
May 15 05:03:45.711452 kubelet[2648]: I0515 05:03:45.711169 2648 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dmt9\" (UniqueName: \"kubernetes.io/projected/81f32847-03ba-451a-a36c-94772ef4219e-kube-api-access-2dmt9\") pod \"cilium-operator-5d85765b45-8jhwf\" (UID: \"81f32847-03ba-451a-a36c-94772ef4219e\") " pod="kube-system/cilium-operator-5d85765b45-8jhwf"
May 15 05:03:45.711784 kubelet[2648]: I0515 05:03:45.711475 2648 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/81f32847-03ba-451a-a36c-94772ef4219e-cilium-config-path\") pod \"cilium-operator-5d85765b45-8jhwf\" (UID: \"81f32847-03ba-451a-a36c-94772ef4219e\") " pod="kube-system/cilium-operator-5d85765b45-8jhwf"
May 15 05:03:45.736413 containerd[1475]: time="2025-05-15T05:03:45.736305472Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ghj8p,Uid:d1e674bb-eee9-4849-b25a-0a1096795f29,Namespace:kube-system,Attempt:0,}"
May 15 05:03:45.849989 containerd[1475]: time="2025-05-15T05:03:45.848799806Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 15 05:03:45.849989 containerd[1475]: time="2025-05-15T05:03:45.849737822Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 15 05:03:45.849989 containerd[1475]: time="2025-05-15T05:03:45.849754383Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 15 05:03:45.849989 containerd[1475]: time="2025-05-15T05:03:45.849847711Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 15 05:03:45.878555 systemd[1]: Started cri-containerd-ea1a79a6ddc5fbc6ba80870289734246617036cf44b27ec362d19ad5d4c80dc4.scope - libcontainer container ea1a79a6ddc5fbc6ba80870289734246617036cf44b27ec362d19ad5d4c80dc4.
May 15 05:03:45.913155 containerd[1475]: time="2025-05-15T05:03:45.913085892Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ghj8p,Uid:d1e674bb-eee9-4849-b25a-0a1096795f29,Namespace:kube-system,Attempt:0,} returns sandbox id \"ea1a79a6ddc5fbc6ba80870289734246617036cf44b27ec362d19ad5d4c80dc4\""
May 15 05:03:45.917474 containerd[1475]: time="2025-05-15T05:03:45.917424425Z" level=info msg="CreateContainer within sandbox \"ea1a79a6ddc5fbc6ba80870289734246617036cf44b27ec362d19ad5d4c80dc4\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
May 15 05:03:45.938488 containerd[1475]: time="2025-05-15T05:03:45.938436319Z" level=info msg="CreateContainer within sandbox \"ea1a79a6ddc5fbc6ba80870289734246617036cf44b27ec362d19ad5d4c80dc4\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"6ebb1bf5946c09976eec127d8cd70b98ba4db2256b269a2bd951980dcdfedaa1\""
May 15 05:03:45.939333 containerd[1475]: time="2025-05-15T05:03:45.939197379Z" level=info msg="StartContainer for \"6ebb1bf5946c09976eec127d8cd70b98ba4db2256b269a2bd951980dcdfedaa1\""
May 15 05:03:45.972515 systemd[1]: Started cri-containerd-6ebb1bf5946c09976eec127d8cd70b98ba4db2256b269a2bd951980dcdfedaa1.scope - libcontainer container 6ebb1bf5946c09976eec127d8cd70b98ba4db2256b269a2bd951980dcdfedaa1.
May 15 05:03:46.007128 containerd[1475]: time="2025-05-15T05:03:46.006925419Z" level=info msg="StartContainer for \"6ebb1bf5946c09976eec127d8cd70b98ba4db2256b269a2bd951980dcdfedaa1\" returns successfully"
May 15 05:03:46.550777 kubelet[2648]: I0515 05:03:46.550437 2648 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-ghj8p" podStartSLOduration=1.550419615 podStartE2EDuration="1.550419615s" podCreationTimestamp="2025-05-15 05:03:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 05:03:46.549177622 +0000 UTC m=+8.199763446" watchObservedRunningTime="2025-05-15 05:03:46.550419615 +0000 UTC m=+8.201005449"
May 15 05:03:46.611294 kubelet[2648]: E0515 05:03:46.611211 2648 configmap.go:193] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition
May 15 05:03:46.611294 kubelet[2648]: E0515 05:03:46.611293 2648 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0fedb74b-4281-479e-9e30-febbe6c42751-cilium-config-path podName:0fedb74b-4281-479e-9e30-febbe6c42751 nodeName:}" failed. No retries permitted until 2025-05-15 05:03:47.111272553 +0000 UTC m=+8.761858377 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/0fedb74b-4281-479e-9e30-febbe6c42751-cilium-config-path") pod "cilium-fk5mn" (UID: "0fedb74b-4281-479e-9e30-febbe6c42751") : failed to sync configmap cache: timed out waiting for the condition
May 15 05:03:46.827769 containerd[1475]: time="2025-05-15T05:03:46.827306276Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-8jhwf,Uid:81f32847-03ba-451a-a36c-94772ef4219e,Namespace:kube-system,Attempt:0,}"
May 15 05:03:46.907840 containerd[1475]: time="2025-05-15T05:03:46.907657833Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 15 05:03:46.907840 containerd[1475]: time="2025-05-15T05:03:46.907797077Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 15 05:03:46.908356 containerd[1475]: time="2025-05-15T05:03:46.907831713Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 15 05:03:46.908356 containerd[1475]: time="2025-05-15T05:03:46.908018999Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 15 05:03:46.950505 systemd[1]: Started cri-containerd-0c2b6faca86477086cf9616abb798570fc42da683bd6e4bff6972528df2b11ec.scope - libcontainer container 0c2b6faca86477086cf9616abb798570fc42da683bd6e4bff6972528df2b11ec.
May 15 05:03:47.002905 containerd[1475]: time="2025-05-15T05:03:47.002443538Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-8jhwf,Uid:81f32847-03ba-451a-a36c-94772ef4219e,Namespace:kube-system,Attempt:0,} returns sandbox id \"0c2b6faca86477086cf9616abb798570fc42da683bd6e4bff6972528df2b11ec\""
May 15 05:03:47.005309 containerd[1475]: time="2025-05-15T05:03:47.005130375Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
May 15 05:03:47.245977 containerd[1475]: time="2025-05-15T05:03:47.245620692Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-fk5mn,Uid:0fedb74b-4281-479e-9e30-febbe6c42751,Namespace:kube-system,Attempt:0,}"
May 15 05:03:47.302151 containerd[1475]: time="2025-05-15T05:03:47.301899473Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 15 05:03:47.302151 containerd[1475]: time="2025-05-15T05:03:47.302029780Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 15 05:03:47.302151 containerd[1475]: time="2025-05-15T05:03:47.302067542Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 15 05:03:47.302864 containerd[1475]: time="2025-05-15T05:03:47.302370768Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 15 05:03:47.342738 systemd[1]: Started cri-containerd-d3f1e5065262948b8ea3560a0a40bdeaee1ec9362437a9bfce87e8ff5e370bc1.scope - libcontainer container d3f1e5065262948b8ea3560a0a40bdeaee1ec9362437a9bfce87e8ff5e370bc1.
May 15 05:03:47.406002 containerd[1475]: time="2025-05-15T05:03:47.405816869Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-fk5mn,Uid:0fedb74b-4281-479e-9e30-febbe6c42751,Namespace:kube-system,Attempt:0,} returns sandbox id \"d3f1e5065262948b8ea3560a0a40bdeaee1ec9362437a9bfce87e8ff5e370bc1\""
May 15 05:03:49.165625 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2148236768.mount: Deactivated successfully.
May 15 05:03:49.972005 containerd[1475]: time="2025-05-15T05:03:49.971950478Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 05:03:49.973833 containerd[1475]: time="2025-05-15T05:03:49.973774388Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197"
May 15 05:03:49.975351 containerd[1475]: time="2025-05-15T05:03:49.975273835Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 05:03:49.976968 containerd[1475]: time="2025-05-15T05:03:49.976840438Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.971031132s"
May 15 05:03:49.976968 containerd[1475]: time="2025-05-15T05:03:49.976871377Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
May 15 05:03:49.979774 containerd[1475]: time="2025-05-15T05:03:49.979549990Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
May 15 05:03:49.980203 containerd[1475]: time="2025-05-15T05:03:49.980115994Z" level=info msg="CreateContainer within sandbox \"0c2b6faca86477086cf9616abb798570fc42da683bd6e4bff6972528df2b11ec\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
May 15 05:03:50.017637 containerd[1475]: time="2025-05-15T05:03:50.017569571Z" level=info msg="CreateContainer within sandbox \"0c2b6faca86477086cf9616abb798570fc42da683bd6e4bff6972528df2b11ec\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"9ec80b84a01aa72ca668246d73749d3fc2d1b2c96498178a2752f60f5e7174d9\""
May 15 05:03:50.018293 containerd[1475]: time="2025-05-15T05:03:50.018229653Z" level=info msg="StartContainer for \"9ec80b84a01aa72ca668246d73749d3fc2d1b2c96498178a2752f60f5e7174d9\""
May 15 05:03:50.046577 systemd[1]: Started cri-containerd-9ec80b84a01aa72ca668246d73749d3fc2d1b2c96498178a2752f60f5e7174d9.scope - libcontainer container 9ec80b84a01aa72ca668246d73749d3fc2d1b2c96498178a2752f60f5e7174d9.
May 15 05:03:50.076908 containerd[1475]: time="2025-05-15T05:03:50.076858729Z" level=info msg="StartContainer for \"9ec80b84a01aa72ca668246d73749d3fc2d1b2c96498178a2752f60f5e7174d9\" returns successfully"
May 15 05:03:50.947561 kubelet[2648]: I0515 05:03:50.947412 2648 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-8jhwf" podStartSLOduration=2.973374444 podStartE2EDuration="5.947394244s" podCreationTimestamp="2025-05-15 05:03:45 +0000 UTC" firstStartedPulling="2025-05-15 05:03:47.003668006 +0000 UTC m=+8.654253830" lastFinishedPulling="2025-05-15 05:03:49.977687806 +0000 UTC m=+11.628273630" observedRunningTime="2025-05-15 05:03:50.659875215 +0000 UTC m=+12.310461039" watchObservedRunningTime="2025-05-15 05:03:50.947394244 +0000 UTC m=+12.597980068"
May 15 05:03:56.578330 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2444064333.mount: Deactivated successfully.
May 15 05:03:59.616871 containerd[1475]: time="2025-05-15T05:03:59.616725488Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 05:03:59.619074 containerd[1475]: time="2025-05-15T05:03:59.618995422Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503"
May 15 05:03:59.620309 containerd[1475]: time="2025-05-15T05:03:59.620168535Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 05:03:59.622194 containerd[1475]: time="2025-05-15T05:03:59.621887941Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 9.642298917s"
May 15 05:03:59.622194 containerd[1475]: time="2025-05-15T05:03:59.621925572Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
May 15 05:03:59.625959 containerd[1475]: time="2025-05-15T05:03:59.625894652Z" level=info msg="CreateContainer within sandbox \"d3f1e5065262948b8ea3560a0a40bdeaee1ec9362437a9bfce87e8ff5e370bc1\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
May 15 05:03:59.649617 containerd[1475]: time="2025-05-15T05:03:59.649553110Z" level=info msg="CreateContainer within sandbox \"d3f1e5065262948b8ea3560a0a40bdeaee1ec9362437a9bfce87e8ff5e370bc1\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"a484c53b7ebf6e5b207cf1c0e31ef1587ef0ee03be238861c47165f9bfc7db53\""
May 15 05:03:59.650120 containerd[1475]: time="2025-05-15T05:03:59.650066269Z" level=info msg="StartContainer for \"a484c53b7ebf6e5b207cf1c0e31ef1587ef0ee03be238861c47165f9bfc7db53\""
May 15 05:03:59.707816 systemd[1]: run-containerd-runc-k8s.io-a484c53b7ebf6e5b207cf1c0e31ef1587ef0ee03be238861c47165f9bfc7db53-runc.45cjWr.mount: Deactivated successfully.
May 15 05:03:59.714492 systemd[1]: Started cri-containerd-a484c53b7ebf6e5b207cf1c0e31ef1587ef0ee03be238861c47165f9bfc7db53.scope - libcontainer container a484c53b7ebf6e5b207cf1c0e31ef1587ef0ee03be238861c47165f9bfc7db53.
May 15 05:03:59.774563 containerd[1475]: time="2025-05-15T05:03:59.774528916Z" level=info msg="StartContainer for \"a484c53b7ebf6e5b207cf1c0e31ef1587ef0ee03be238861c47165f9bfc7db53\" returns successfully"
May 15 05:03:59.780305 systemd[1]: cri-containerd-a484c53b7ebf6e5b207cf1c0e31ef1587ef0ee03be238861c47165f9bfc7db53.scope: Deactivated successfully.
May 15 05:04:00.644364 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a484c53b7ebf6e5b207cf1c0e31ef1587ef0ee03be238861c47165f9bfc7db53-rootfs.mount: Deactivated successfully.
May 15 05:04:01.395730 containerd[1475]: time="2025-05-15T05:04:01.395565566Z" level=info msg="shim disconnected" id=a484c53b7ebf6e5b207cf1c0e31ef1587ef0ee03be238861c47165f9bfc7db53 namespace=k8s.io
May 15 05:04:01.396981 containerd[1475]: time="2025-05-15T05:04:01.396114652Z" level=warning msg="cleaning up after shim disconnected" id=a484c53b7ebf6e5b207cf1c0e31ef1587ef0ee03be238861c47165f9bfc7db53 namespace=k8s.io
May 15 05:04:01.396981 containerd[1475]: time="2025-05-15T05:04:01.396153506Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 15 05:04:01.630664 containerd[1475]: time="2025-05-15T05:04:01.630283124Z" level=info msg="CreateContainer within sandbox \"d3f1e5065262948b8ea3560a0a40bdeaee1ec9362437a9bfce87e8ff5e370bc1\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
May 15 05:04:01.667351 containerd[1475]: time="2025-05-15T05:04:01.666434569Z" level=info msg="CreateContainer within sandbox \"d3f1e5065262948b8ea3560a0a40bdeaee1ec9362437a9bfce87e8ff5e370bc1\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"c899204a4078196899f27343933844041501774699781ace64318000af1cb4b3\""
May 15 05:04:01.669088 containerd[1475]: time="2025-05-15T05:04:01.668955792Z" level=info msg="StartContainer for \"c899204a4078196899f27343933844041501774699781ace64318000af1cb4b3\""
May 15 05:04:01.670494 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount714628956.mount: Deactivated successfully.
May 15 05:04:01.760467 systemd[1]: Started cri-containerd-c899204a4078196899f27343933844041501774699781ace64318000af1cb4b3.scope - libcontainer container c899204a4078196899f27343933844041501774699781ace64318000af1cb4b3.
May 15 05:04:01.790538 containerd[1475]: time="2025-05-15T05:04:01.790478419Z" level=info msg="StartContainer for \"c899204a4078196899f27343933844041501774699781ace64318000af1cb4b3\" returns successfully"
May 15 05:04:01.799516 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 15 05:04:01.800216 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 15 05:04:01.800479 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
May 15 05:04:01.805842 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 15 05:04:01.806073 systemd[1]: cri-containerd-c899204a4078196899f27343933844041501774699781ace64318000af1cb4b3.scope: Deactivated successfully.
May 15 05:04:01.825035 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c899204a4078196899f27343933844041501774699781ace64318000af1cb4b3-rootfs.mount: Deactivated successfully.
May 15 05:04:01.832742 containerd[1475]: time="2025-05-15T05:04:01.832687286Z" level=info msg="shim disconnected" id=c899204a4078196899f27343933844041501774699781ace64318000af1cb4b3 namespace=k8s.io
May 15 05:04:01.832742 containerd[1475]: time="2025-05-15T05:04:01.832735928Z" level=warning msg="cleaning up after shim disconnected" id=c899204a4078196899f27343933844041501774699781ace64318000af1cb4b3 namespace=k8s.io
May 15 05:04:01.832742 containerd[1475]: time="2025-05-15T05:04:01.832745836Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 15 05:04:01.835338 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 15 05:04:02.638208 containerd[1475]: time="2025-05-15T05:04:02.637826060Z" level=info msg="CreateContainer within sandbox \"d3f1e5065262948b8ea3560a0a40bdeaee1ec9362437a9bfce87e8ff5e370bc1\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
May 15 05:04:02.699645 containerd[1475]: time="2025-05-15T05:04:02.698148847Z" level=info msg="CreateContainer within sandbox \"d3f1e5065262948b8ea3560a0a40bdeaee1ec9362437a9bfce87e8ff5e370bc1\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"dfb6233e62b80b6122d32020302bcbbed0069f93de57d967a0df17210d67d12d\""
May 15 05:04:02.700745 containerd[1475]: time="2025-05-15T05:04:02.700619865Z" level=info msg="StartContainer for \"dfb6233e62b80b6122d32020302bcbbed0069f93de57d967a0df17210d67d12d\""
May 15 05:04:02.700890 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1291553057.mount: Deactivated successfully.
May 15 05:04:02.740906 systemd[1]: run-containerd-runc-k8s.io-dfb6233e62b80b6122d32020302bcbbed0069f93de57d967a0df17210d67d12d-runc.5QQL7W.mount: Deactivated successfully.
May 15 05:04:02.750568 systemd[1]: Started cri-containerd-dfb6233e62b80b6122d32020302bcbbed0069f93de57d967a0df17210d67d12d.scope - libcontainer container dfb6233e62b80b6122d32020302bcbbed0069f93de57d967a0df17210d67d12d.
May 15 05:04:02.786941 containerd[1475]: time="2025-05-15T05:04:02.786763749Z" level=info msg="StartContainer for \"dfb6233e62b80b6122d32020302bcbbed0069f93de57d967a0df17210d67d12d\" returns successfully"
May 15 05:04:02.788100 systemd[1]: cri-containerd-dfb6233e62b80b6122d32020302bcbbed0069f93de57d967a0df17210d67d12d.scope: Deactivated successfully.
May 15 05:04:02.831627 containerd[1475]: time="2025-05-15T05:04:02.831496116Z" level=info msg="shim disconnected" id=dfb6233e62b80b6122d32020302bcbbed0069f93de57d967a0df17210d67d12d namespace=k8s.io
May 15 05:04:02.831627 containerd[1475]: time="2025-05-15T05:04:02.831590304Z" level=warning msg="cleaning up after shim disconnected" id=dfb6233e62b80b6122d32020302bcbbed0069f93de57d967a0df17210d67d12d namespace=k8s.io
May 15 05:04:02.831627 containerd[1475]: time="2025-05-15T05:04:02.831605602Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 15 05:04:03.649104 containerd[1475]: time="2025-05-15T05:04:03.648736436Z" level=info msg="CreateContainer within sandbox \"d3f1e5065262948b8ea3560a0a40bdeaee1ec9362437a9bfce87e8ff5e370bc1\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 15 05:04:03.684712 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dfb6233e62b80b6122d32020302bcbbed0069f93de57d967a0df17210d67d12d-rootfs.mount: Deactivated successfully.
May 15 05:04:03.686535 containerd[1475]: time="2025-05-15T05:04:03.686001824Z" level=info msg="CreateContainer within sandbox \"d3f1e5065262948b8ea3560a0a40bdeaee1ec9362437a9bfce87e8ff5e370bc1\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"6bedfedc298f93e00c70b952d7a8008c7990670d014204209e2f8cdce015de58\""
May 15 05:04:03.690873 containerd[1475]: time="2025-05-15T05:04:03.688964227Z" level=info msg="StartContainer for \"6bedfedc298f93e00c70b952d7a8008c7990670d014204209e2f8cdce015de58\""
May 15 05:04:03.749529 systemd[1]: Started cri-containerd-6bedfedc298f93e00c70b952d7a8008c7990670d014204209e2f8cdce015de58.scope - libcontainer container 6bedfedc298f93e00c70b952d7a8008c7990670d014204209e2f8cdce015de58.
May 15 05:04:03.776574 systemd[1]: cri-containerd-6bedfedc298f93e00c70b952d7a8008c7990670d014204209e2f8cdce015de58.scope: Deactivated successfully.
May 15 05:04:03.781370 containerd[1475]: time="2025-05-15T05:04:03.781342969Z" level=info msg="StartContainer for \"6bedfedc298f93e00c70b952d7a8008c7990670d014204209e2f8cdce015de58\" returns successfully"
May 15 05:04:03.801474 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6bedfedc298f93e00c70b952d7a8008c7990670d014204209e2f8cdce015de58-rootfs.mount: Deactivated successfully.
May 15 05:04:03.813626 containerd[1475]: time="2025-05-15T05:04:03.813486825Z" level=info msg="shim disconnected" id=6bedfedc298f93e00c70b952d7a8008c7990670d014204209e2f8cdce015de58 namespace=k8s.io
May 15 05:04:03.813626 containerd[1475]: time="2025-05-15T05:04:03.813562016Z" level=warning msg="cleaning up after shim disconnected" id=6bedfedc298f93e00c70b952d7a8008c7990670d014204209e2f8cdce015de58 namespace=k8s.io
May 15 05:04:03.813626 containerd[1475]: time="2025-05-15T05:04:03.813577987Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 15 05:04:04.663166 containerd[1475]: time="2025-05-15T05:04:04.662679540Z" level=info msg="CreateContainer within sandbox \"d3f1e5065262948b8ea3560a0a40bdeaee1ec9362437a9bfce87e8ff5e370bc1\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 15 05:04:04.728547 containerd[1475]: time="2025-05-15T05:04:04.728492200Z" level=info msg="CreateContainer within sandbox \"d3f1e5065262948b8ea3560a0a40bdeaee1ec9362437a9bfce87e8ff5e370bc1\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"4850ce5d469dd16b4b7c8bc95a6540b058c407d9e45e874577794551edac278f\""
May 15 05:04:04.729283 containerd[1475]: time="2025-05-15T05:04:04.729243656Z" level=info msg="StartContainer for \"4850ce5d469dd16b4b7c8bc95a6540b058c407d9e45e874577794551edac278f\""
May 15 05:04:04.762294 systemd[1]: run-containerd-runc-k8s.io-4850ce5d469dd16b4b7c8bc95a6540b058c407d9e45e874577794551edac278f-runc.GRNs3N.mount: Deactivated successfully.
May 15 05:04:04.773620 systemd[1]: Started cri-containerd-4850ce5d469dd16b4b7c8bc95a6540b058c407d9e45e874577794551edac278f.scope - libcontainer container 4850ce5d469dd16b4b7c8bc95a6540b058c407d9e45e874577794551edac278f.
May 15 05:04:04.815836 containerd[1475]: time="2025-05-15T05:04:04.815782323Z" level=info msg="StartContainer for \"4850ce5d469dd16b4b7c8bc95a6540b058c407d9e45e874577794551edac278f\" returns successfully"
May 15 05:04:04.908468 kubelet[2648]: I0515 05:04:04.906951 2648 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
May 15 05:04:04.947396 systemd[1]: Created slice kubepods-burstable-pod7104df07_f5e8_4125_aa47_53f7e3de4cec.slice - libcontainer container kubepods-burstable-pod7104df07_f5e8_4125_aa47_53f7e3de4cec.slice.
May 15 05:04:04.957844 systemd[1]: Created slice kubepods-burstable-podd8661aa3_ee39_47ff_b949_4c5dfdaef7c4.slice - libcontainer container kubepods-burstable-podd8661aa3_ee39_47ff_b949_4c5dfdaef7c4.slice.
May 15 05:04:04.983273 kubelet[2648]: I0515 05:04:04.983239 2648 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7104df07-f5e8-4125-aa47-53f7e3de4cec-config-volume\") pod \"coredns-6f6b679f8f-mzmhm\" (UID: \"7104df07-f5e8-4125-aa47-53f7e3de4cec\") " pod="kube-system/coredns-6f6b679f8f-mzmhm"
May 15 05:04:04.983273 kubelet[2648]: I0515 05:04:04.983278 2648 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92rp5\" (UniqueName: \"kubernetes.io/projected/7104df07-f5e8-4125-aa47-53f7e3de4cec-kube-api-access-92rp5\") pod \"coredns-6f6b679f8f-mzmhm\" (UID: \"7104df07-f5e8-4125-aa47-53f7e3de4cec\") " pod="kube-system/coredns-6f6b679f8f-mzmhm"
May 15 05:04:04.983477 kubelet[2648]: I0515 05:04:04.983300 2648 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9t7fn\" (UniqueName: \"kubernetes.io/projected/d8661aa3-ee39-47ff-b949-4c5dfdaef7c4-kube-api-access-9t7fn\") pod \"coredns-6f6b679f8f-45jw6\" (UID: \"d8661aa3-ee39-47ff-b949-4c5dfdaef7c4\") " pod="kube-system/coredns-6f6b679f8f-45jw6"
May 15 05:04:04.983477 kubelet[2648]: I0515 05:04:04.983356 2648 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d8661aa3-ee39-47ff-b949-4c5dfdaef7c4-config-volume\") pod \"coredns-6f6b679f8f-45jw6\" (UID: \"d8661aa3-ee39-47ff-b949-4c5dfdaef7c4\") " pod="kube-system/coredns-6f6b679f8f-45jw6"
May 15 05:04:05.257124 containerd[1475]: time="2025-05-15T05:04:05.257082076Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-mzmhm,Uid:7104df07-f5e8-4125-aa47-53f7e3de4cec,Namespace:kube-system,Attempt:0,}"
May 15 05:04:05.261660 containerd[1475]: time="2025-05-15T05:04:05.260498741Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-45jw6,Uid:d8661aa3-ee39-47ff-b949-4c5dfdaef7c4,Namespace:kube-system,Attempt:0,}"
May 15 05:04:05.760136 kubelet[2648]: I0515 05:04:05.759961 2648 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-fk5mn" podStartSLOduration=8.542840099 podStartE2EDuration="20.759930034s" podCreationTimestamp="2025-05-15 05:03:45 +0000 UTC" firstStartedPulling="2025-05-15 05:03:47.407284809 +0000 UTC m=+9.057870633" lastFinishedPulling="2025-05-15 05:03:59.624374744 +0000 UTC m=+21.274960568" observedRunningTime="2025-05-15 05:04:05.75631231 +0000 UTC m=+27.406898204" watchObservedRunningTime="2025-05-15 05:04:05.759930034 +0000 UTC m=+27.410515908"
May 15 05:04:07.034353 systemd-networkd[1371]: cilium_host: Link UP
May 15 05:04:07.035224 systemd-networkd[1371]: cilium_net: Link UP
May 15 05:04:07.036987 systemd-networkd[1371]: cilium_net: Gained carrier
May 15 05:04:07.038814 systemd-networkd[1371]: cilium_host: Gained carrier
May 15 05:04:07.144983 systemd-networkd[1371]: cilium_vxlan: Link UP
May 15 05:04:07.144995 systemd-networkd[1371]: cilium_vxlan: Gained carrier
May 15 05:04:07.190569 systemd-networkd[1371]: cilium_net: Gained IPv6LL
May 15 05:04:07.511701 kernel: NET: Registered PF_ALG protocol family
May 15 05:04:07.630687 systemd-networkd[1371]: cilium_host: Gained IPv6LL
May 15 05:04:08.395290 systemd-networkd[1371]: lxc_health: Link UP
May 15 05:04:08.423109 systemd-networkd[1371]: lxc_health: Gained carrier
May 15 05:04:08.869968 systemd-networkd[1371]: lxcb6693e60c49e: Link UP
May 15 05:04:08.880029 kernel: eth0: renamed from tmp746a5
May 15 05:04:08.893204 systemd-networkd[1371]: lxc5564c295df1d: Link UP
May 15 05:04:08.901494 kernel: eth0: renamed from tmp7f08b
May 15 05:04:08.908667 systemd-networkd[1371]: lxc5564c295df1d: Gained carrier
May 15 05:04:08.912434 systemd-networkd[1371]: lxcb6693e60c49e: Gained carrier
May 15 05:04:09.166465 systemd-networkd[1371]: cilium_vxlan: Gained IPv6LL
May 15 05:04:09.745727 systemd-networkd[1371]: lxc_health: Gained IPv6LL
May 15 05:04:10.254545 systemd-networkd[1371]: lxcb6693e60c49e: Gained IPv6LL
May 15 05:04:10.958578 systemd-networkd[1371]: lxc5564c295df1d: Gained IPv6LL
May 15 05:04:13.767348 containerd[1475]: time="2025-05-15T05:04:13.766042541Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 15 05:04:13.767348 containerd[1475]: time="2025-05-15T05:04:13.766152677Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 15 05:04:13.767348 containerd[1475]: time="2025-05-15T05:04:13.766172264Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 15 05:04:13.767348 containerd[1475]: time="2025-05-15T05:04:13.766300776Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 15 05:04:13.816637 systemd[1]: Started cri-containerd-7f08bf997d94e9e7425336208686cc66648740ae50c430695fe688e24e64b649.scope - libcontainer container 7f08bf997d94e9e7425336208686cc66648740ae50c430695fe688e24e64b649.
May 15 05:04:13.859132 containerd[1475]: time="2025-05-15T05:04:13.858827790Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 15 05:04:13.859132 containerd[1475]: time="2025-05-15T05:04:13.858902570Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 15 05:04:13.859132 containerd[1475]: time="2025-05-15T05:04:13.858924691Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 15 05:04:13.860138 containerd[1475]: time="2025-05-15T05:04:13.859954448Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 15 05:04:13.904664 systemd[1]: Started cri-containerd-746a53fa6319051b95eba7bcf9b293268864f4c6d30c688e4add6a13107fc1ad.scope - libcontainer container 746a53fa6319051b95eba7bcf9b293268864f4c6d30c688e4add6a13107fc1ad.
May 15 05:04:13.944871 containerd[1475]: time="2025-05-15T05:04:13.944815747Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-45jw6,Uid:d8661aa3-ee39-47ff-b949-4c5dfdaef7c4,Namespace:kube-system,Attempt:0,} returns sandbox id \"7f08bf997d94e9e7425336208686cc66648740ae50c430695fe688e24e64b649\""
May 15 05:04:13.951534 containerd[1475]: time="2025-05-15T05:04:13.951495066Z" level=info msg="CreateContainer within sandbox \"7f08bf997d94e9e7425336208686cc66648740ae50c430695fe688e24e64b649\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
May 15 05:04:13.988884 containerd[1475]: time="2025-05-15T05:04:13.988832279Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-mzmhm,Uid:7104df07-f5e8-4125-aa47-53f7e3de4cec,Namespace:kube-system,Attempt:0,} returns sandbox id \"746a53fa6319051b95eba7bcf9b293268864f4c6d30c688e4add6a13107fc1ad\""
May 15 05:04:13.993027 containerd[1475]: time="2025-05-15T05:04:13.992988494Z" level=info msg="CreateContainer within sandbox \"746a53fa6319051b95eba7bcf9b293268864f4c6d30c688e4add6a13107fc1ad\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
May 15 05:04:14.164026 containerd[1475]: time="2025-05-15T05:04:14.163412324Z" level=info msg="CreateContainer within sandbox \"7f08bf997d94e9e7425336208686cc66648740ae50c430695fe688e24e64b649\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"26c07c19a9629382c24270bdc29d91150caff1e14c71ed99074fd69ef9c31925\""
May 15 05:04:14.166977 containerd[1475]: time="2025-05-15T05:04:14.165903177Z" level=info msg="StartContainer for \"26c07c19a9629382c24270bdc29d91150caff1e14c71ed99074fd69ef9c31925\""
May 15 05:04:14.185418 containerd[1475]: time="2025-05-15T05:04:14.185297647Z" level=info msg="CreateContainer within sandbox \"746a53fa6319051b95eba7bcf9b293268864f4c6d30c688e4add6a13107fc1ad\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5648e0b7367fa6619d94b87be3f7a89d655640ff0d00a95bba93ecf802fad8ec\""
May 15 05:04:14.187884 containerd[1475]: time="2025-05-15T05:04:14.187436198Z" level=info msg="StartContainer for \"5648e0b7367fa6619d94b87be3f7a89d655640ff0d00a95bba93ecf802fad8ec\""
May 15 05:04:14.246476 systemd[1]: Started cri-containerd-26c07c19a9629382c24270bdc29d91150caff1e14c71ed99074fd69ef9c31925.scope - libcontainer container 26c07c19a9629382c24270bdc29d91150caff1e14c71ed99074fd69ef9c31925.
May 15 05:04:14.250548 systemd[1]: Started cri-containerd-5648e0b7367fa6619d94b87be3f7a89d655640ff0d00a95bba93ecf802fad8ec.scope - libcontainer container 5648e0b7367fa6619d94b87be3f7a89d655640ff0d00a95bba93ecf802fad8ec.
May 15 05:04:14.292294 containerd[1475]: time="2025-05-15T05:04:14.292254631Z" level=info msg="StartContainer for \"26c07c19a9629382c24270bdc29d91150caff1e14c71ed99074fd69ef9c31925\" returns successfully"
May 15 05:04:14.314750 containerd[1475]: time="2025-05-15T05:04:14.314297540Z" level=info msg="StartContainer for \"5648e0b7367fa6619d94b87be3f7a89d655640ff0d00a95bba93ecf802fad8ec\" returns successfully"
May 15 05:04:14.751718 kubelet[2648]: I0515 05:04:14.751558 2648 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-45jw6" podStartSLOduration=29.751502472 podStartE2EDuration="29.751502472s" podCreationTimestamp="2025-05-15 05:03:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 05:04:14.748899889 +0000 UTC m=+36.399485763" watchObservedRunningTime="2025-05-15 05:04:14.751502472 +0000 UTC m=+36.402088346"
May 15 05:04:14.784907 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1211936560.mount: Deactivated successfully.
May 15 05:04:14.852708 kubelet[2648]: I0515 05:04:14.852093 2648 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-mzmhm" podStartSLOduration=29.852069994 podStartE2EDuration="29.852069994s" podCreationTimestamp="2025-05-15 05:03:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 05:04:14.813163131 +0000 UTC m=+36.463749005" watchObservedRunningTime="2025-05-15 05:04:14.852069994 +0000 UTC m=+36.502655818"
May 15 05:06:50.204534 update_engine[1441]: I20250515 05:06:50.203997 1441 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
May 15 05:06:50.209557 update_engine[1441]: I20250515 05:06:50.205152 1441 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
May 15 05:06:50.209557 update_engine[1441]: I20250515 05:06:50.206665 1441 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
May 15 05:06:50.211268 update_engine[1441]: I20250515 05:06:50.211149 1441 omaha_request_params.cc:62] Current group set to stable
May 15 05:06:50.212427 update_engine[1441]: I20250515 05:06:50.212289 1441 update_attempter.cc:499] Already updated boot flags. Skipping.
May 15 05:06:50.212427 update_engine[1441]: I20250515 05:06:50.212379 1441 update_attempter.cc:643] Scheduling an action processor start.
May 15 05:06:50.212631 update_engine[1441]: I20250515 05:06:50.212464 1441 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
May 15 05:06:50.212713 update_engine[1441]: I20250515 05:06:50.212669 1441 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
May 15 05:06:50.214178 update_engine[1441]: I20250515 05:06:50.212883 1441 omaha_request_action.cc:271] Posting an Omaha request to disabled
May 15 05:06:50.214178 update_engine[1441]: I20250515 05:06:50.212920 1441 omaha_request_action.cc:272] Request:
May 15 05:06:50.214178 update_engine[1441]:
May 15 05:06:50.214178 update_engine[1441]:
May 15 05:06:50.214178 update_engine[1441]:
May 15 05:06:50.214178 update_engine[1441]:
May 15 05:06:50.214178 update_engine[1441]:
May 15 05:06:50.214178 update_engine[1441]:
May 15 05:06:50.214178 update_engine[1441]:
May 15 05:06:50.214178 update_engine[1441]:
May 15 05:06:50.214178 update_engine[1441]: I20250515 05:06:50.212950 1441 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
May 15 05:06:50.218749 locksmithd[1489]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
May 15 05:06:50.221142 update_engine[1441]: I20250515 05:06:50.221071 1441 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
May 15 05:06:50.222491 update_engine[1441]: I20250515 05:06:50.222368 1441 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
May 15 05:06:50.230047 update_engine[1441]: E20250515 05:06:50.229907 1441 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
May 15 05:06:50.230282 update_engine[1441]: I20250515 05:06:50.230124 1441 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
May 15 05:07:00.112467 update_engine[1441]: I20250515 05:07:00.112185 1441 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
May 15 05:07:00.114480 update_engine[1441]: I20250515 05:07:00.113965 1441 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
May 15 05:07:00.114665 update_engine[1441]: I20250515 05:07:00.114548 1441 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
May 15 05:07:00.119931 update_engine[1441]: E20250515 05:07:00.119804 1441 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
May 15 05:07:00.119931 update_engine[1441]: I20250515 05:07:00.119928 1441 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
May 15 05:07:10.103871 update_engine[1441]: I20250515 05:07:10.103091 1441 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
May 15 05:07:10.103871 update_engine[1441]: I20250515 05:07:10.103670 1441 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
May 15 05:07:10.104741 update_engine[1441]: I20250515 05:07:10.104202 1441 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
May 15 05:07:10.109795 update_engine[1441]: E20250515 05:07:10.109696 1441 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
May 15 05:07:10.109966 update_engine[1441]: I20250515 05:07:10.109819 1441 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
May 15 05:07:20.111893 update_engine[1441]: I20250515 05:07:20.111768 1441 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
May 15 05:07:20.112714 update_engine[1441]: I20250515 05:07:20.112110 1441 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
May 15 05:07:20.112714 update_engine[1441]: I20250515 05:07:20.112444 1441 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
May 15 05:07:20.117800 update_engine[1441]: E20250515 05:07:20.117743 1441 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
May 15 05:07:20.117986 update_engine[1441]: I20250515 05:07:20.117815 1441 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
May 15 05:07:20.117986 update_engine[1441]: I20250515 05:07:20.117839 1441 omaha_request_action.cc:617] Omaha request response:
May 15 05:07:20.117986 update_engine[1441]: E20250515 05:07:20.117962 1441 omaha_request_action.cc:636] Omaha request network transfer failed.
May 15 05:07:20.118242 update_engine[1441]: I20250515 05:07:20.118010 1441 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
May 15 05:07:20.118242 update_engine[1441]: I20250515 05:07:20.118023 1441 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
May 15 05:07:20.118242 update_engine[1441]: I20250515 05:07:20.118031 1441 update_attempter.cc:306] Processing Done.
May 15 05:07:20.118242 update_engine[1441]: E20250515 05:07:20.118067 1441 update_attempter.cc:619] Update failed.
May 15 05:07:20.118242 update_engine[1441]: I20250515 05:07:20.118085 1441 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
May 15 05:07:20.118242 update_engine[1441]: I20250515 05:07:20.118096 1441 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
May 15 05:07:20.118242 update_engine[1441]: I20250515 05:07:20.118105 1441 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
May 15 05:07:20.118242 update_engine[1441]: I20250515 05:07:20.118207 1441 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
May 15 05:07:20.118242 update_engine[1441]: I20250515 05:07:20.118242 1441 omaha_request_action.cc:271] Posting an Omaha request to disabled
May 15 05:07:20.118920 update_engine[1441]: I20250515 05:07:20.118252 1441 omaha_request_action.cc:272] Request:
May 15 05:07:20.118920 update_engine[1441]:
May 15 05:07:20.118920 update_engine[1441]:
May 15 05:07:20.118920 update_engine[1441]:
May 15 05:07:20.118920 update_engine[1441]:
May 15 05:07:20.118920 update_engine[1441]:
May 15 05:07:20.118920 update_engine[1441]:
May 15 05:07:20.118920 update_engine[1441]: I20250515 05:07:20.118263 1441 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
May 15 05:07:20.118920 update_engine[1441]: I20250515 05:07:20.118480 1441 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
May 15 05:07:20.118920 update_engine[1441]: I20250515 05:07:20.118753 1441 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
May 15 05:07:20.120091 locksmithd[1489]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
May 15 05:07:20.123964 update_engine[1441]: E20250515 05:07:20.123904 1441 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
May 15 05:07:20.124131 update_engine[1441]: I20250515 05:07:20.123976 1441 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
May 15 05:07:20.124131 update_engine[1441]: I20250515 05:07:20.123991 1441 omaha_request_action.cc:617] Omaha request response:
May 15 05:07:20.124131 update_engine[1441]: I20250515 05:07:20.124001 1441 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
May 15 05:07:20.124131 update_engine[1441]: I20250515 05:07:20.124009 1441 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
May 15 05:07:20.124131 update_engine[1441]: I20250515 05:07:20.124017 1441 update_attempter.cc:306] Processing Done.
May 15 05:07:20.124131 update_engine[1441]: I20250515 05:07:20.124024 1441 update_attempter.cc:310] Error event sent.
May 15 05:07:20.124131 update_engine[1441]: I20250515 05:07:20.124045 1441 update_check_scheduler.cc:74] Next update check in 42m47s
May 15 05:07:20.125582 locksmithd[1489]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
May 15 05:08:10.725223 systemd[1]: Started sshd@9-172.24.4.5:22-172.24.4.1:55958.service - OpenSSH per-connection server daemon (172.24.4.1:55958).
May 15 05:08:11.803266 sshd[4062]: Accepted publickey for core from 172.24.4.1 port 55958 ssh2: RSA SHA256:uRPz9tu+U72XmHvlxQy05CWlny1Lwcfk85X9SSPLUAc
May 15 05:08:11.809306 sshd-session[4062]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 05:08:11.829275 systemd-logind[1440]: New session 12 of user core.
May 15 05:08:11.835066 systemd[1]: Started session-12.scope - Session 12 of User core.
May 15 05:08:12.600828 sshd[4064]: Connection closed by 172.24.4.1 port 55958
May 15 05:08:12.601941 sshd-session[4062]: pam_unix(sshd:session): session closed for user core
May 15 05:08:12.610690 systemd[1]: sshd@9-172.24.4.5:22-172.24.4.1:55958.service: Deactivated successfully.
May 15 05:08:12.618602 systemd[1]: session-12.scope: Deactivated successfully.
May 15 05:08:12.626217 systemd-logind[1440]: Session 12 logged out. Waiting for processes to exit.
May 15 05:08:12.630023 systemd-logind[1440]: Removed session 12.
May 15 05:08:17.640239 systemd[1]: Started sshd@10-172.24.4.5:22-172.24.4.1:41736.service - OpenSSH per-connection server daemon (172.24.4.1:41736).
May 15 05:08:19.004410 sshd[4078]: Accepted publickey for core from 172.24.4.1 port 41736 ssh2: RSA SHA256:uRPz9tu+U72XmHvlxQy05CWlny1Lwcfk85X9SSPLUAc
May 15 05:08:19.007108 sshd-session[4078]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 05:08:19.021462 systemd-logind[1440]: New session 13 of user core.
May 15 05:08:19.031744 systemd[1]: Started session-13.scope - Session 13 of User core.
May 15 05:08:19.722822 sshd[4080]: Connection closed by 172.24.4.1 port 41736
May 15 05:08:19.724541 sshd-session[4078]: pam_unix(sshd:session): session closed for user core
May 15 05:08:19.736148 systemd[1]: sshd@10-172.24.4.5:22-172.24.4.1:41736.service: Deactivated successfully.
May 15 05:08:19.741220 systemd[1]: session-13.scope: Deactivated successfully.
May 15 05:08:19.744205 systemd-logind[1440]: Session 13 logged out. Waiting for processes to exit.
May 15 05:08:19.747615 systemd-logind[1440]: Removed session 13.
May 15 05:08:24.736693 systemd[1]: Started sshd@11-172.24.4.5:22-172.24.4.1:34042.service - OpenSSH per-connection server daemon (172.24.4.1:34042).
May 15 05:08:25.980419 sshd[4092]: Accepted publickey for core from 172.24.4.1 port 34042 ssh2: RSA SHA256:uRPz9tu+U72XmHvlxQy05CWlny1Lwcfk85X9SSPLUAc
May 15 05:08:25.983989 sshd-session[4092]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 05:08:25.999775 systemd-logind[1440]: New session 14 of user core.
May 15 05:08:26.009061 systemd[1]: Started session-14.scope - Session 14 of User core.
May 15 05:08:26.789853 sshd[4094]: Connection closed by 172.24.4.1 port 34042
May 15 05:08:26.791450 sshd-session[4092]: pam_unix(sshd:session): session closed for user core
May 15 05:08:26.801704 systemd[1]: sshd@11-172.24.4.5:22-172.24.4.1:34042.service: Deactivated successfully.
May 15 05:08:26.808092 systemd[1]: session-14.scope: Deactivated successfully.
May 15 05:08:26.810555 systemd-logind[1440]: Session 14 logged out. Waiting for processes to exit.
May 15 05:08:26.813298 systemd-logind[1440]: Removed session 14.
May 15 05:08:31.824171 systemd[1]: Started sshd@12-172.24.4.5:22-172.24.4.1:34058.service - OpenSSH per-connection server daemon (172.24.4.1:34058).
May 15 05:08:33.256445 sshd[4105]: Accepted publickey for core from 172.24.4.1 port 34058 ssh2: RSA SHA256:uRPz9tu+U72XmHvlxQy05CWlny1Lwcfk85X9SSPLUAc
May 15 05:08:33.259635 sshd-session[4105]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 05:08:33.283944 systemd-logind[1440]: New session 15 of user core.
May 15 05:08:33.293975 systemd[1]: Started session-15.scope - Session 15 of User core.
May 15 05:08:33.863134 sshd[4108]: Connection closed by 172.24.4.1 port 34058
May 15 05:08:33.866189 sshd-session[4105]: pam_unix(sshd:session): session closed for user core
May 15 05:08:33.874121 systemd[1]: sshd@12-172.24.4.5:22-172.24.4.1:34058.service: Deactivated successfully.
May 15 05:08:33.877971 systemd[1]: session-15.scope: Deactivated successfully.
May 15 05:08:33.880689 systemd-logind[1440]: Session 15 logged out. Waiting for processes to exit.
May 15 05:08:33.890697 systemd[1]: Started sshd@13-172.24.4.5:22-172.24.4.1:57076.service - OpenSSH per-connection server daemon (172.24.4.1:57076).
May 15 05:08:33.893044 systemd-logind[1440]: Removed session 15.
May 15 05:08:35.174758 sshd[4120]: Accepted publickey for core from 172.24.4.1 port 57076 ssh2: RSA SHA256:uRPz9tu+U72XmHvlxQy05CWlny1Lwcfk85X9SSPLUAc
May 15 05:08:35.177846 sshd-session[4120]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 05:08:35.189429 systemd-logind[1440]: New session 16 of user core.
May 15 05:08:35.199490 systemd[1]: Started session-16.scope - Session 16 of User core.
May 15 05:08:35.988365 sshd[4122]: Connection closed by 172.24.4.1 port 57076
May 15 05:08:35.988120 sshd-session[4120]: pam_unix(sshd:session): session closed for user core
May 15 05:08:36.007704 systemd[1]: sshd@13-172.24.4.5:22-172.24.4.1:57076.service: Deactivated successfully.
May 15 05:08:36.012107 systemd[1]: session-16.scope: Deactivated successfully.
May 15 05:08:36.016135 systemd-logind[1440]: Session 16 logged out. Waiting for processes to exit.
May 15 05:08:36.026913 systemd[1]: Started sshd@14-172.24.4.5:22-172.24.4.1:57086.service - OpenSSH per-connection server daemon (172.24.4.1:57086).
May 15 05:08:36.029644 systemd-logind[1440]: Removed session 16.
May 15 05:08:37.210730 sshd[4131]: Accepted publickey for core from 172.24.4.1 port 57086 ssh2: RSA SHA256:uRPz9tu+U72XmHvlxQy05CWlny1Lwcfk85X9SSPLUAc
May 15 05:08:37.211511 sshd-session[4131]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 05:08:37.223448 systemd-logind[1440]: New session 17 of user core.
May 15 05:08:37.230553 systemd[1]: Started session-17.scope - Session 17 of User core.
May 15 05:08:37.896394 sshd[4134]: Connection closed by 172.24.4.1 port 57086
May 15 05:08:37.898716 sshd-session[4131]: pam_unix(sshd:session): session closed for user core
May 15 05:08:37.905181 systemd[1]: sshd@14-172.24.4.5:22-172.24.4.1:57086.service: Deactivated successfully.
May 15 05:08:37.910832 systemd[1]: session-17.scope: Deactivated successfully.
May 15 05:08:37.915681 systemd-logind[1440]: Session 17 logged out. Waiting for processes to exit.
May 15 05:08:37.918076 systemd-logind[1440]: Removed session 17.
May 15 05:08:42.923954 systemd[1]: Started sshd@15-172.24.4.5:22-172.24.4.1:57100.service - OpenSSH per-connection server daemon (172.24.4.1:57100).
May 15 05:08:44.032408 sshd[4147]: Accepted publickey for core from 172.24.4.1 port 57100 ssh2: RSA SHA256:uRPz9tu+U72XmHvlxQy05CWlny1Lwcfk85X9SSPLUAc
May 15 05:08:44.055773 sshd-session[4147]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 05:08:44.075903 systemd-logind[1440]: New session 18 of user core.
May 15 05:08:44.094910 systemd[1]: Started session-18.scope - Session 18 of User core.
May 15 05:08:44.657435 sshd[4149]: Connection closed by 172.24.4.1 port 57100
May 15 05:08:44.658833 sshd-session[4147]: pam_unix(sshd:session): session closed for user core
May 15 05:08:44.667010 systemd-logind[1440]: Session 18 logged out. Waiting for processes to exit.
May 15 05:08:44.667457 systemd[1]: sshd@15-172.24.4.5:22-172.24.4.1:57100.service: Deactivated successfully.
May 15 05:08:44.673805 systemd[1]: session-18.scope: Deactivated successfully.
May 15 05:08:44.680883 systemd-logind[1440]: Removed session 18.
May 15 05:08:49.689041 systemd[1]: Started sshd@16-172.24.4.5:22-172.24.4.1:43878.service - OpenSSH per-connection server daemon (172.24.4.1:43878).
May 15 05:08:50.941258 sshd[4162]: Accepted publickey for core from 172.24.4.1 port 43878 ssh2: RSA SHA256:uRPz9tu+U72XmHvlxQy05CWlny1Lwcfk85X9SSPLUAc
May 15 05:08:50.944223 sshd-session[4162]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 05:08:50.954224 systemd-logind[1440]: New session 19 of user core.
May 15 05:08:50.967664 systemd[1]: Started session-19.scope - Session 19 of User core.
May 15 05:08:51.790651 sshd[4164]: Connection closed by 172.24.4.1 port 43878
May 15 05:08:51.792247 sshd-session[4162]: pam_unix(sshd:session): session closed for user core
May 15 05:08:51.821661 systemd[1]: sshd@16-172.24.4.5:22-172.24.4.1:43878.service: Deactivated successfully.
May 15 05:08:51.827605 systemd[1]: session-19.scope: Deactivated successfully.
May 15 05:08:51.833448 systemd-logind[1440]: Session 19 logged out. Waiting for processes to exit.
May 15 05:08:51.842977 systemd[1]: Started sshd@17-172.24.4.5:22-172.24.4.1:43894.service - OpenSSH per-connection server daemon (172.24.4.1:43894).
May 15 05:08:51.846857 systemd-logind[1440]: Removed session 19.
May 15 05:08:53.209120 sshd[4175]: Accepted publickey for core from 172.24.4.1 port 43894 ssh2: RSA SHA256:uRPz9tu+U72XmHvlxQy05CWlny1Lwcfk85X9SSPLUAc
May 15 05:08:53.212948 sshd-session[4175]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 05:08:53.224609 systemd-logind[1440]: New session 20 of user core.
May 15 05:08:53.241817 systemd[1]: Started session-20.scope - Session 20 of User core.
May 15 05:08:54.032858 sshd[4177]: Connection closed by 172.24.4.1 port 43894
May 15 05:08:54.034254 sshd-session[4175]: pam_unix(sshd:session): session closed for user core
May 15 05:08:54.047632 systemd[1]: sshd@17-172.24.4.5:22-172.24.4.1:43894.service: Deactivated successfully.
May 15 05:08:54.056291 systemd[1]: session-20.scope: Deactivated successfully.
May 15 05:08:54.062078 systemd-logind[1440]: Session 20 logged out. Waiting for processes to exit.
May 15 05:08:54.072981 systemd[1]: Started sshd@18-172.24.4.5:22-172.24.4.1:41904.service - OpenSSH per-connection server daemon (172.24.4.1:41904).
May 15 05:08:54.077892 systemd-logind[1440]: Removed session 20.
May 15 05:08:55.208507 sshd[4186]: Accepted publickey for core from 172.24.4.1 port 41904 ssh2: RSA SHA256:uRPz9tu+U72XmHvlxQy05CWlny1Lwcfk85X9SSPLUAc
May 15 05:08:55.211511 sshd-session[4186]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 05:08:55.224436 systemd-logind[1440]: New session 21 of user core.
May 15 05:08:55.227670 systemd[1]: Started session-21.scope - Session 21 of User core.
May 15 05:08:58.326374 sshd[4188]: Connection closed by 172.24.4.1 port 41904
May 15 05:08:58.327678 sshd-session[4186]: pam_unix(sshd:session): session closed for user core
May 15 05:08:58.348094 systemd[1]: sshd@18-172.24.4.5:22-172.24.4.1:41904.service: Deactivated successfully.
May 15 05:08:58.354989 systemd[1]: session-21.scope: Deactivated successfully.
May 15 05:08:58.359106 systemd-logind[1440]: Session 21 logged out. Waiting for processes to exit.
May 15 05:08:58.365970 systemd[1]: Started sshd@19-172.24.4.5:22-172.24.4.1:41908.service - OpenSSH per-connection server daemon (172.24.4.1:41908).
May 15 05:08:58.369772 systemd-logind[1440]: Removed session 21.
May 15 05:08:59.682281 sshd[4204]: Accepted publickey for core from 172.24.4.1 port 41908 ssh2: RSA SHA256:uRPz9tu+U72XmHvlxQy05CWlny1Lwcfk85X9SSPLUAc
May 15 05:08:59.685160 sshd-session[4204]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 05:08:59.694044 systemd-logind[1440]: New session 22 of user core.
May 15 05:08:59.703640 systemd[1]: Started session-22.scope - Session 22 of User core.
May 15 05:09:00.678287 sshd[4206]: Connection closed by 172.24.4.1 port 41908
May 15 05:09:00.679048 sshd-session[4204]: pam_unix(sshd:session): session closed for user core
May 15 05:09:00.692548 systemd[1]: sshd@19-172.24.4.5:22-172.24.4.1:41908.service: Deactivated successfully.
May 15 05:09:00.696250 systemd[1]: session-22.scope: Deactivated successfully.
May 15 05:09:00.700668 systemd-logind[1440]: Session 22 logged out. Waiting for processes to exit.
May 15 05:09:00.708957 systemd[1]: Started sshd@20-172.24.4.5:22-172.24.4.1:41918.service - OpenSSH per-connection server daemon (172.24.4.1:41918).
May 15 05:09:00.714208 systemd-logind[1440]: Removed session 22.
May 15 05:09:01.954439 sshd[4216]: Accepted publickey for core from 172.24.4.1 port 41918 ssh2: RSA SHA256:uRPz9tu+U72XmHvlxQy05CWlny1Lwcfk85X9SSPLUAc
May 15 05:09:01.957731 sshd-session[4216]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 05:09:01.965490 systemd-logind[1440]: New session 23 of user core.
May 15 05:09:01.974574 systemd[1]: Started session-23.scope - Session 23 of User core.
May 15 05:09:02.691421 sshd[4218]: Connection closed by 172.24.4.1 port 41918
May 15 05:09:02.692089 sshd-session[4216]: pam_unix(sshd:session): session closed for user core
May 15 05:09:02.701001 systemd[1]: sshd@20-172.24.4.5:22-172.24.4.1:41918.service: Deactivated successfully.
May 15 05:09:02.708023 systemd[1]: session-23.scope: Deactivated successfully.
May 15 05:09:02.710764 systemd-logind[1440]: Session 23 logged out. Waiting for processes to exit.
May 15 05:09:02.713833 systemd-logind[1440]: Removed session 23.
May 15 05:09:07.721730 systemd[1]: Started sshd@21-172.24.4.5:22-172.24.4.1:45192.service - OpenSSH per-connection server daemon (172.24.4.1:45192).
May 15 05:09:08.949500 sshd[4231]: Accepted publickey for core from 172.24.4.1 port 45192 ssh2: RSA SHA256:uRPz9tu+U72XmHvlxQy05CWlny1Lwcfk85X9SSPLUAc
May 15 05:09:08.950868 sshd-session[4231]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 05:09:08.962129 systemd-logind[1440]: New session 24 of user core.
May 15 05:09:08.970639 systemd[1]: Started session-24.scope - Session 24 of User core.
May 15 05:09:09.695633 sshd[4233]: Connection closed by 172.24.4.1 port 45192
May 15 05:09:09.696490 sshd-session[4231]: pam_unix(sshd:session): session closed for user core
May 15 05:09:09.701018 systemd-logind[1440]: Session 24 logged out. Waiting for processes to exit.
May 15 05:09:09.702237 systemd[1]: sshd@21-172.24.4.5:22-172.24.4.1:45192.service: Deactivated successfully.
May 15 05:09:09.708080 systemd[1]: session-24.scope: Deactivated successfully.
May 15 05:09:09.712842 systemd-logind[1440]: Removed session 24.
May 15 05:09:14.719988 systemd[1]: Started sshd@22-172.24.4.5:22-172.24.4.1:50390.service - OpenSSH per-connection server daemon (172.24.4.1:50390).
May 15 05:09:15.999878 sshd[4244]: Accepted publickey for core from 172.24.4.1 port 50390 ssh2: RSA SHA256:uRPz9tu+U72XmHvlxQy05CWlny1Lwcfk85X9SSPLUAc
May 15 05:09:16.002589 sshd-session[4244]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 05:09:16.012155 systemd-logind[1440]: New session 25 of user core.
May 15 05:09:16.020658 systemd[1]: Started session-25.scope - Session 25 of User core.
May 15 05:09:16.814778 sshd[4246]: Connection closed by 172.24.4.1 port 50390
May 15 05:09:16.815750 sshd-session[4244]: pam_unix(sshd:session): session closed for user core
May 15 05:09:16.821791 systemd[1]: sshd@22-172.24.4.5:22-172.24.4.1:50390.service: Deactivated successfully.
May 15 05:09:16.826963 systemd[1]: session-25.scope: Deactivated successfully.
May 15 05:09:16.828719 systemd-logind[1440]: Session 25 logged out. Waiting for processes to exit.
May 15 05:09:16.830997 systemd-logind[1440]: Removed session 25.
May 15 05:09:21.845136 systemd[1]: Started sshd@23-172.24.4.5:22-172.24.4.1:50392.service - OpenSSH per-connection server daemon (172.24.4.1:50392).
May 15 05:09:23.054078 sshd[4259]: Accepted publickey for core from 172.24.4.1 port 50392 ssh2: RSA SHA256:uRPz9tu+U72XmHvlxQy05CWlny1Lwcfk85X9SSPLUAc
May 15 05:09:23.057019 sshd-session[4259]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 05:09:23.067716 systemd-logind[1440]: New session 26 of user core.
May 15 05:09:23.078450 systemd[1]: Started session-26.scope - Session 26 of User core.
May 15 05:09:23.792106 sshd[4261]: Connection closed by 172.24.4.1 port 50392
May 15 05:09:23.793242 sshd-session[4259]: pam_unix(sshd:session): session closed for user core
May 15 05:09:23.803894 systemd[1]: sshd@23-172.24.4.5:22-172.24.4.1:50392.service: Deactivated successfully.
May 15 05:09:23.808251 systemd[1]: session-26.scope: Deactivated successfully.
May 15 05:09:23.811799 systemd-logind[1440]: Session 26 logged out. Waiting for processes to exit.
May 15 05:09:23.823008 systemd[1]: Started sshd@24-172.24.4.5:22-172.24.4.1:51188.service - OpenSSH per-connection server daemon (172.24.4.1:51188).
May 15 05:09:23.827537 systemd-logind[1440]: Removed session 26.
May 15 05:09:25.219462 sshd[4272]: Accepted publickey for core from 172.24.4.1 port 51188 ssh2: RSA SHA256:uRPz9tu+U72XmHvlxQy05CWlny1Lwcfk85X9SSPLUAc
May 15 05:09:25.223044 sshd-session[4272]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 05:09:25.236839 systemd-logind[1440]: New session 27 of user core.
May 15 05:09:25.241823 systemd[1]: Started session-27.scope - Session 27 of User core.
May 15 05:09:27.658354 containerd[1475]: time="2025-05-15T05:09:27.656905943Z" level=info msg="StopContainer for \"9ec80b84a01aa72ca668246d73749d3fc2d1b2c96498178a2752f60f5e7174d9\" with timeout 30 (s)"
May 15 05:09:27.661333 containerd[1475]: time="2025-05-15T05:09:27.660928876Z" level=info msg="Stop container \"9ec80b84a01aa72ca668246d73749d3fc2d1b2c96498178a2752f60f5e7174d9\" with signal terminated"
May 15 05:09:27.696280 systemd[1]: cri-containerd-9ec80b84a01aa72ca668246d73749d3fc2d1b2c96498178a2752f60f5e7174d9.scope: Deactivated successfully.
May 15 05:09:27.696885 systemd[1]: cri-containerd-9ec80b84a01aa72ca668246d73749d3fc2d1b2c96498178a2752f60f5e7174d9.scope: Consumed 1.438s CPU time.
May 15 05:09:27.714226 containerd[1475]: time="2025-05-15T05:09:27.713673197Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 15 05:09:27.727475 containerd[1475]: time="2025-05-15T05:09:27.727287055Z" level=info msg="StopContainer for \"4850ce5d469dd16b4b7c8bc95a6540b058c407d9e45e874577794551edac278f\" with timeout 2 (s)"
May 15 05:09:27.727915 containerd[1475]: time="2025-05-15T05:09:27.727892782Z" level=info msg="Stop container \"4850ce5d469dd16b4b7c8bc95a6540b058c407d9e45e874577794551edac278f\" with signal terminated"
May 15 05:09:27.748135 systemd-networkd[1371]: lxc_health: Link DOWN
May 15 05:09:27.748392 systemd-networkd[1371]: lxc_health: Lost carrier
May 15 05:09:27.771553 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9ec80b84a01aa72ca668246d73749d3fc2d1b2c96498178a2752f60f5e7174d9-rootfs.mount: Deactivated successfully.
May 15 05:09:27.775279 systemd[1]: cri-containerd-4850ce5d469dd16b4b7c8bc95a6540b058c407d9e45e874577794551edac278f.scope: Deactivated successfully.
May 15 05:09:27.776232 systemd[1]: cri-containerd-4850ce5d469dd16b4b7c8bc95a6540b058c407d9e45e874577794551edac278f.scope: Consumed 11.358s CPU time.
May 15 05:09:27.788370 containerd[1475]: time="2025-05-15T05:09:27.787545593Z" level=info msg="shim disconnected" id=9ec80b84a01aa72ca668246d73749d3fc2d1b2c96498178a2752f60f5e7174d9 namespace=k8s.io
May 15 05:09:27.788370 containerd[1475]: time="2025-05-15T05:09:27.788366564Z" level=warning msg="cleaning up after shim disconnected" id=9ec80b84a01aa72ca668246d73749d3fc2d1b2c96498178a2752f60f5e7174d9 namespace=k8s.io
May 15 05:09:27.788687 containerd[1475]: time="2025-05-15T05:09:27.788389026Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 15 05:09:27.829289 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4850ce5d469dd16b4b7c8bc95a6540b058c407d9e45e874577794551edac278f-rootfs.mount: Deactivated successfully.
May 15 05:09:27.843369 containerd[1475]: time="2025-05-15T05:09:27.843244911Z" level=info msg="shim disconnected" id=4850ce5d469dd16b4b7c8bc95a6540b058c407d9e45e874577794551edac278f namespace=k8s.io
May 15 05:09:27.843676 containerd[1475]: time="2025-05-15T05:09:27.843648680Z" level=warning msg="cleaning up after shim disconnected" id=4850ce5d469dd16b4b7c8bc95a6540b058c407d9e45e874577794551edac278f namespace=k8s.io
May 15 05:09:27.843810 containerd[1475]: time="2025-05-15T05:09:27.843787320Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 15 05:09:27.860769 containerd[1475]: time="2025-05-15T05:09:27.860714489Z" level=info msg="StopContainer for \"9ec80b84a01aa72ca668246d73749d3fc2d1b2c96498178a2752f60f5e7174d9\" returns successfully"
May 15 05:09:27.862618 containerd[1475]: time="2025-05-15T05:09:27.862180992Z" level=info msg="StopPodSandbox for \"0c2b6faca86477086cf9616abb798570fc42da683bd6e4bff6972528df2b11ec\""
May 15 05:09:27.862618 containerd[1475]: time="2025-05-15T05:09:27.862308732Z" level=info msg="Container to stop \"9ec80b84a01aa72ca668246d73749d3fc2d1b2c96498178a2752f60f5e7174d9\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 15 05:09:27.865397 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0c2b6faca86477086cf9616abb798570fc42da683bd6e4bff6972528df2b11ec-shm.mount: Deactivated successfully.
May 15 05:09:27.870281 containerd[1475]: time="2025-05-15T05:09:27.870218291Z" level=warning msg="cleanup warnings time=\"2025-05-15T05:09:27Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
May 15 05:09:27.877332 systemd[1]: cri-containerd-0c2b6faca86477086cf9616abb798570fc42da683bd6e4bff6972528df2b11ec.scope: Deactivated successfully.
May 15 05:09:27.879123 containerd[1475]: time="2025-05-15T05:09:27.878937921Z" level=info msg="StopContainer for \"4850ce5d469dd16b4b7c8bc95a6540b058c407d9e45e874577794551edac278f\" returns successfully"
May 15 05:09:27.879693 containerd[1475]: time="2025-05-15T05:09:27.879450763Z" level=info msg="StopPodSandbox for \"d3f1e5065262948b8ea3560a0a40bdeaee1ec9362437a9bfce87e8ff5e370bc1\""
May 15 05:09:27.879693 containerd[1475]: time="2025-05-15T05:09:27.879486631Z" level=info msg="Container to stop \"dfb6233e62b80b6122d32020302bcbbed0069f93de57d967a0df17210d67d12d\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 15 05:09:27.879693 containerd[1475]: time="2025-05-15T05:09:27.879539690Z" level=info msg="Container to stop \"a484c53b7ebf6e5b207cf1c0e31ef1587ef0ee03be238861c47165f9bfc7db53\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 15 05:09:27.879693 containerd[1475]: time="2025-05-15T05:09:27.879560900Z" level=info msg="Container to stop \"c899204a4078196899f27343933844041501774699781ace64318000af1cb4b3\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 15 05:09:27.879693 containerd[1475]: time="2025-05-15T05:09:27.879572211Z" level=info msg="Container to stop \"6bedfedc298f93e00c70b952d7a8008c7990670d014204209e2f8cdce015de58\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 15 05:09:27.879693 containerd[1475]: time="2025-05-15T05:09:27.879582772Z" level=info msg="Container to stop \"4850ce5d469dd16b4b7c8bc95a6540b058c407d9e45e874577794551edac278f\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 15 05:09:27.883056 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d3f1e5065262948b8ea3560a0a40bdeaee1ec9362437a9bfce87e8ff5e370bc1-shm.mount: Deactivated successfully.
May 15 05:09:27.895251 systemd[1]: cri-containerd-d3f1e5065262948b8ea3560a0a40bdeaee1ec9362437a9bfce87e8ff5e370bc1.scope: Deactivated successfully.
May 15 05:09:27.942046 containerd[1475]: time="2025-05-15T05:09:27.940726421Z" level=info msg="shim disconnected" id=d3f1e5065262948b8ea3560a0a40bdeaee1ec9362437a9bfce87e8ff5e370bc1 namespace=k8s.io
May 15 05:09:27.942046 containerd[1475]: time="2025-05-15T05:09:27.940804127Z" level=warning msg="cleaning up after shim disconnected" id=d3f1e5065262948b8ea3560a0a40bdeaee1ec9362437a9bfce87e8ff5e370bc1 namespace=k8s.io
May 15 05:09:27.942046 containerd[1475]: time="2025-05-15T05:09:27.940814096Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 15 05:09:27.947781 containerd[1475]: time="2025-05-15T05:09:27.947482435Z" level=info msg="shim disconnected" id=0c2b6faca86477086cf9616abb798570fc42da683bd6e4bff6972528df2b11ec namespace=k8s.io
May 15 05:09:27.947781 containerd[1475]: time="2025-05-15T05:09:27.947552786Z" level=warning msg="cleaning up after shim disconnected" id=0c2b6faca86477086cf9616abb798570fc42da683bd6e4bff6972528df2b11ec namespace=k8s.io
May 15 05:09:27.947781 containerd[1475]: time="2025-05-15T05:09:27.947563156Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 15 05:09:27.980977 containerd[1475]: time="2025-05-15T05:09:27.980891246Z" level=info msg="TearDown network for sandbox \"0c2b6faca86477086cf9616abb798570fc42da683bd6e4bff6972528df2b11ec\" successfully"
May 15 05:09:27.981853 containerd[1475]: time="2025-05-15T05:09:27.981699302Z" level=info msg="StopPodSandbox for \"0c2b6faca86477086cf9616abb798570fc42da683bd6e4bff6972528df2b11ec\" returns successfully"
May 15 05:09:27.990388 containerd[1475]: time="2025-05-15T05:09:27.990070368Z" level=info msg="TearDown network for sandbox \"d3f1e5065262948b8ea3560a0a40bdeaee1ec9362437a9bfce87e8ff5e370bc1\" successfully"
May 15 05:09:27.990388 containerd[1475]: time="2025-05-15T05:09:27.990122917Z" level=info msg="StopPodSandbox for \"d3f1e5065262948b8ea3560a0a40bdeaee1ec9362437a9bfce87e8ff5e370bc1\" returns successfully"
May 15 05:09:28.002036 kubelet[2648]: I0515 05:09:28.001839 2648 scope.go:117] "RemoveContainer" containerID="9ec80b84a01aa72ca668246d73749d3fc2d1b2c96498178a2752f60f5e7174d9"
May 15 05:09:28.005306 containerd[1475]: time="2025-05-15T05:09:28.004650291Z" level=info msg="RemoveContainer for \"9ec80b84a01aa72ca668246d73749d3fc2d1b2c96498178a2752f60f5e7174d9\""
May 15 05:09:28.012158 containerd[1475]: time="2025-05-15T05:09:28.012104414Z" level=info msg="RemoveContainer for \"9ec80b84a01aa72ca668246d73749d3fc2d1b2c96498178a2752f60f5e7174d9\" returns successfully"
May 15 05:09:28.012811 kubelet[2648]: I0515 05:09:28.012768 2648 scope.go:117] "RemoveContainer" containerID="9ec80b84a01aa72ca668246d73749d3fc2d1b2c96498178a2752f60f5e7174d9"
May 15 05:09:28.013444 containerd[1475]: time="2025-05-15T05:09:28.013259754Z" level=error msg="ContainerStatus for \"9ec80b84a01aa72ca668246d73749d3fc2d1b2c96498178a2752f60f5e7174d9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9ec80b84a01aa72ca668246d73749d3fc2d1b2c96498178a2752f60f5e7174d9\": not found"
May 15 05:09:28.013605 kubelet[2648]: E0515 05:09:28.013549 2648 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9ec80b84a01aa72ca668246d73749d3fc2d1b2c96498178a2752f60f5e7174d9\": not found" containerID="9ec80b84a01aa72ca668246d73749d3fc2d1b2c96498178a2752f60f5e7174d9"
May 15 05:09:28.013755 kubelet[2648]: I0515 05:09:28.013632 2648 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9ec80b84a01aa72ca668246d73749d3fc2d1b2c96498178a2752f60f5e7174d9"} err="failed to get container status \"9ec80b84a01aa72ca668246d73749d3fc2d1b2c96498178a2752f60f5e7174d9\": rpc error: code = NotFound desc = an error occurred when try to find container \"9ec80b84a01aa72ca668246d73749d3fc2d1b2c96498178a2752f60f5e7174d9\": not found"
May 15 05:09:28.067332 kubelet[2648]: I0515 05:09:28.067256 2648 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0fedb74b-4281-479e-9e30-febbe6c42751-lib-modules\") pod \"0fedb74b-4281-479e-9e30-febbe6c42751\" (UID: \"0fedb74b-4281-479e-9e30-febbe6c42751\") "
May 15 05:09:28.067501 kubelet[2648]: I0515 05:09:28.067366 2648 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0fedb74b-4281-479e-9e30-febbe6c42751-hostproc\") pod \"0fedb74b-4281-479e-9e30-febbe6c42751\" (UID: \"0fedb74b-4281-479e-9e30-febbe6c42751\") "
May 15 05:09:28.067501 kubelet[2648]: I0515 05:09:28.067405 2648 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0fedb74b-4281-479e-9e30-febbe6c42751-hubble-tls\") pod \"0fedb74b-4281-479e-9e30-febbe6c42751\" (UID: \"0fedb74b-4281-479e-9e30-febbe6c42751\") "
May 15 05:09:28.067501 kubelet[2648]: I0515 05:09:28.067423 2648 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0fedb74b-4281-479e-9e30-febbe6c42751-cni-path\") pod \"0fedb74b-4281-479e-9e30-febbe6c42751\" (UID: \"0fedb74b-4281-479e-9e30-febbe6c42751\") "
May 15 05:09:28.067501 kubelet[2648]: I0515 05:09:28.067454 2648 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0fedb74b-4281-479e-9e30-febbe6c42751-cilium-config-path\") pod \"0fedb74b-4281-479e-9e30-febbe6c42751\" (UID: \"0fedb74b-4281-479e-9e30-febbe6c42751\") "
May 15 05:09:28.067632 kubelet[2648]: I0515 05:09:28.067495 2648 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0fedb74b-4281-479e-9e30-febbe6c42751-host-proc-sys-net\") pod \"0fedb74b-4281-479e-9e30-febbe6c42751\" (UID: \"0fedb74b-4281-479e-9e30-febbe6c42751\") "
May 15 05:09:28.067632 kubelet[2648]: I0515 05:09:28.067525 2648 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0fedb74b-4281-479e-9e30-febbe6c42751-cilium-run\") pod \"0fedb74b-4281-479e-9e30-febbe6c42751\" (UID: \"0fedb74b-4281-479e-9e30-febbe6c42751\") "
May 15 05:09:28.067632 kubelet[2648]: I0515 05:09:28.067551 2648 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0fedb74b-4281-479e-9e30-febbe6c42751-cilium-cgroup\") pod \"0fedb74b-4281-479e-9e30-febbe6c42751\" (UID: \"0fedb74b-4281-479e-9e30-febbe6c42751\") "
May 15 05:09:28.067632 kubelet[2648]: I0515 05:09:28.067581 2648 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0fedb74b-4281-479e-9e30-febbe6c42751-host-proc-sys-kernel\") pod \"0fedb74b-4281-479e-9e30-febbe6c42751\" (UID: \"0fedb74b-4281-479e-9e30-febbe6c42751\") "
May 15 05:09:28.067632 kubelet[2648]: I0515 05:09:28.067610 2648 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0fedb74b-4281-479e-9e30-febbe6c42751-xtables-lock\") pod \"0fedb74b-4281-479e-9e30-febbe6c42751\" (UID: \"0fedb74b-4281-479e-9e30-febbe6c42751\") "
May 15 05:09:28.067799 kubelet[2648]: I0515 05:09:28.067645 2648 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gjrjs\" (UniqueName: \"kubernetes.io/projected/0fedb74b-4281-479e-9e30-febbe6c42751-kube-api-access-gjrjs\") pod \"0fedb74b-4281-479e-9e30-febbe6c42751\" (UID: \"0fedb74b-4281-479e-9e30-febbe6c42751\") "
May 15 05:09:28.067799 kubelet[2648]: I0515 05:09:28.067678 2648 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0fedb74b-4281-479e-9e30-febbe6c42751-clustermesh-secrets\") pod \"0fedb74b-4281-479e-9e30-febbe6c42751\" (UID: \"0fedb74b-4281-479e-9e30-febbe6c42751\") "
May 15 05:09:28.067799 kubelet[2648]: I0515 05:09:28.067708 2648 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2dmt9\" (UniqueName: \"kubernetes.io/projected/81f32847-03ba-451a-a36c-94772ef4219e-kube-api-access-2dmt9\") pod \"81f32847-03ba-451a-a36c-94772ef4219e\" (UID: \"81f32847-03ba-451a-a36c-94772ef4219e\") "
May 15 05:09:28.067799 kubelet[2648]: I0515 05:09:28.067740 2648 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0fedb74b-4281-479e-9e30-febbe6c42751-etc-cni-netd\") pod \"0fedb74b-4281-479e-9e30-febbe6c42751\" (UID: \"0fedb74b-4281-479e-9e30-febbe6c42751\") "
May 15 05:09:28.067799 kubelet[2648]: I0515 05:09:28.067769 2648 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/81f32847-03ba-451a-a36c-94772ef4219e-cilium-config-path\") pod \"81f32847-03ba-451a-a36c-94772ef4219e\" (UID: \"81f32847-03ba-451a-a36c-94772ef4219e\") "
May 15 05:09:28.067799 kubelet[2648]: I0515 05:09:28.067795 2648 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0fedb74b-4281-479e-9e30-febbe6c42751-bpf-maps\") pod \"0fedb74b-4281-479e-9e30-febbe6c42751\" (UID: \"0fedb74b-4281-479e-9e30-febbe6c42751\") "
May 15 05:09:28.068009 kubelet[2648]: I0515 05:09:28.067948 2648 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0fedb74b-4281-479e-9e30-febbe6c42751-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "0fedb74b-4281-479e-9e30-febbe6c42751" (UID: "0fedb74b-4281-479e-9e30-febbe6c42751"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 15 05:09:28.068042 kubelet[2648]: I0515 05:09:28.068018 2648 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0fedb74b-4281-479e-9e30-febbe6c42751-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "0fedb74b-4281-479e-9e30-febbe6c42751" (UID: "0fedb74b-4281-479e-9e30-febbe6c42751"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 15 05:09:28.068072 kubelet[2648]: I0515 05:09:28.068046 2648 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0fedb74b-4281-479e-9e30-febbe6c42751-hostproc" (OuterVolumeSpecName: "hostproc") pod "0fedb74b-4281-479e-9e30-febbe6c42751" (UID: "0fedb74b-4281-479e-9e30-febbe6c42751"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 15 05:09:28.072351 kubelet[2648]: I0515 05:09:28.070437 2648 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0fedb74b-4281-479e-9e30-febbe6c42751-cni-path" (OuterVolumeSpecName: "cni-path") pod "0fedb74b-4281-479e-9e30-febbe6c42751" (UID: "0fedb74b-4281-479e-9e30-febbe6c42751"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 15 05:09:28.072351 kubelet[2648]: I0515 05:09:28.071514 2648 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0fedb74b-4281-479e-9e30-febbe6c42751-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "0fedb74b-4281-479e-9e30-febbe6c42751" (UID: "0fedb74b-4281-479e-9e30-febbe6c42751"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 15 05:09:28.072351 kubelet[2648]: I0515 05:09:28.071574 2648 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0fedb74b-4281-479e-9e30-febbe6c42751-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "0fedb74b-4281-479e-9e30-febbe6c42751" (UID: "0fedb74b-4281-479e-9e30-febbe6c42751"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 15 05:09:28.072351 kubelet[2648]: I0515 05:09:28.071598 2648 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0fedb74b-4281-479e-9e30-febbe6c42751-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "0fedb74b-4281-479e-9e30-febbe6c42751" (UID: "0fedb74b-4281-479e-9e30-febbe6c42751"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 15 05:09:28.072351 kubelet[2648]: I0515 05:09:28.071649 2648 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0fedb74b-4281-479e-9e30-febbe6c42751-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "0fedb74b-4281-479e-9e30-febbe6c42751" (UID: "0fedb74b-4281-479e-9e30-febbe6c42751"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 15 05:09:28.072602 kubelet[2648]: I0515 05:09:28.071684 2648 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0fedb74b-4281-479e-9e30-febbe6c42751-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "0fedb74b-4281-479e-9e30-febbe6c42751" (UID: "0fedb74b-4281-479e-9e30-febbe6c42751"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 15 05:09:28.074351 kubelet[2648]: I0515 05:09:28.074016 2648 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0fedb74b-4281-479e-9e30-febbe6c42751-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "0fedb74b-4281-479e-9e30-febbe6c42751" (UID: "0fedb74b-4281-479e-9e30-febbe6c42751"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
May 15 05:09:28.076254 kubelet[2648]: I0515 05:09:28.076228 2648 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0fedb74b-4281-479e-9e30-febbe6c42751-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "0fedb74b-4281-479e-9e30-febbe6c42751" (UID: "0fedb74b-4281-479e-9e30-febbe6c42751"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
May 15 05:09:28.077193 kubelet[2648]: I0515 05:09:28.077146 2648 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0fedb74b-4281-479e-9e30-febbe6c42751-kube-api-access-gjrjs" (OuterVolumeSpecName: "kube-api-access-gjrjs") pod "0fedb74b-4281-479e-9e30-febbe6c42751" (UID: "0fedb74b-4281-479e-9e30-febbe6c42751"). InnerVolumeSpecName "kube-api-access-gjrjs". PluginName "kubernetes.io/projected", VolumeGidValue ""
May 15 05:09:28.081543 kubelet[2648]: I0515 05:09:28.077581 2648 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0fedb74b-4281-479e-9e30-febbe6c42751-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "0fedb74b-4281-479e-9e30-febbe6c42751" (UID: "0fedb74b-4281-479e-9e30-febbe6c42751"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 15 05:09:28.085803 kubelet[2648]: I0515 05:09:28.085766 2648 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81f32847-03ba-451a-a36c-94772ef4219e-kube-api-access-2dmt9" (OuterVolumeSpecName: "kube-api-access-2dmt9") pod "81f32847-03ba-451a-a36c-94772ef4219e" (UID: "81f32847-03ba-451a-a36c-94772ef4219e"). InnerVolumeSpecName "kube-api-access-2dmt9". PluginName "kubernetes.io/projected", VolumeGidValue ""
May 15 05:09:28.088898 kubelet[2648]: I0515 05:09:28.088867 2648 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81f32847-03ba-451a-a36c-94772ef4219e-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "81f32847-03ba-451a-a36c-94772ef4219e" (UID: "81f32847-03ba-451a-a36c-94772ef4219e"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
May 15 05:09:28.092622 kubelet[2648]: I0515 05:09:28.092566 2648 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0fedb74b-4281-479e-9e30-febbe6c42751-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "0fedb74b-4281-479e-9e30-febbe6c42751" (UID: "0fedb74b-4281-479e-9e30-febbe6c42751"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
May 15 05:09:28.168940 kubelet[2648]: I0515 05:09:28.168673 2648 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0fedb74b-4281-479e-9e30-febbe6c42751-host-proc-sys-kernel\") on node \"ci-4152-2-3-n-5005c4e40f.novalocal\" DevicePath \"\""
May 15 05:09:28.168940 kubelet[2648]: I0515 05:09:28.168729 2648 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0fedb74b-4281-479e-9e30-febbe6c42751-cilium-run\") on node \"ci-4152-2-3-n-5005c4e40f.novalocal\" DevicePath \"\""
May 15 05:09:28.168940 kubelet[2648]: I0515 05:09:28.168742 2648 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0fedb74b-4281-479e-9e30-febbe6c42751-cilium-cgroup\") on node \"ci-4152-2-3-n-5005c4e40f.novalocal\" DevicePath \"\""
May 15 05:09:28.168940 kubelet[2648]: I0515 05:09:28.168754 2648 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0fedb74b-4281-479e-9e30-febbe6c42751-xtables-lock\") on node \"ci-4152-2-3-n-5005c4e40f.novalocal\" DevicePath \"\""
May 15 05:09:28.168940 kubelet[2648]: I0515 05:09:28.168766 2648 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0fedb74b-4281-479e-9e30-febbe6c42751-etc-cni-netd\") on node \"ci-4152-2-3-n-5005c4e40f.novalocal\" DevicePath \"\""
May 15 05:09:28.168940 kubelet[2648]: I0515 05:09:28.168777 2648 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-gjrjs\" (UniqueName: \"kubernetes.io/projected/0fedb74b-4281-479e-9e30-febbe6c42751-kube-api-access-gjrjs\") on node \"ci-4152-2-3-n-5005c4e40f.novalocal\" DevicePath \"\""
May 15 05:09:28.168940 kubelet[2648]: I0515 05:09:28.168788 2648 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0fedb74b-4281-479e-9e30-febbe6c42751-clustermesh-secrets\") on node \"ci-4152-2-3-n-5005c4e40f.novalocal\" DevicePath \"\""
May 15 05:09:28.169304 kubelet[2648]: I0515 05:09:28.168799 2648 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-2dmt9\" (UniqueName: \"kubernetes.io/projected/81f32847-03ba-451a-a36c-94772ef4219e-kube-api-access-2dmt9\") on node \"ci-4152-2-3-n-5005c4e40f.novalocal\" DevicePath \"\""
May 15 05:09:28.169304 kubelet[2648]: I0515 05:09:28.168810 2648 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0fedb74b-4281-479e-9e30-febbe6c42751-bpf-maps\") on node \"ci-4152-2-3-n-5005c4e40f.novalocal\" DevicePath \"\""
May 15 05:09:28.169304 kubelet[2648]: I0515 05:09:28.168830 2648 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/81f32847-03ba-451a-a36c-94772ef4219e-cilium-config-path\") on node \"ci-4152-2-3-n-5005c4e40f.novalocal\" DevicePath \"\""
May 15 05:09:28.169304 kubelet[2648]: I0515 05:09:28.168841 2648 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0fedb74b-4281-479e-9e30-febbe6c42751-lib-modules\") on node \"ci-4152-2-3-n-5005c4e40f.novalocal\" DevicePath \"\""
May 15 05:09:28.169304 kubelet[2648]: I0515 05:09:28.168853 2648 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0fedb74b-4281-479e-9e30-febbe6c42751-hostproc\") on node \"ci-4152-2-3-n-5005c4e40f.novalocal\" DevicePath \"\""
May 15 05:09:28.169304 kubelet[2648]: I0515 05:09:28.168862 2648 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0fedb74b-4281-479e-9e30-febbe6c42751-hubble-tls\") on node \"ci-4152-2-3-n-5005c4e40f.novalocal\" DevicePath \"\""
May 15 05:09:28.169304 kubelet[2648]: I0515 05:09:28.168872 2648 reconciler_common.go:288]
"Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0fedb74b-4281-479e-9e30-febbe6c42751-cni-path\") on node \"ci-4152-2-3-n-5005c4e40f.novalocal\" DevicePath \"\"" May 15 05:09:28.169555 kubelet[2648]: I0515 05:09:28.168882 2648 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0fedb74b-4281-479e-9e30-febbe6c42751-cilium-config-path\") on node \"ci-4152-2-3-n-5005c4e40f.novalocal\" DevicePath \"\"" May 15 05:09:28.169555 kubelet[2648]: I0515 05:09:28.168893 2648 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0fedb74b-4281-479e-9e30-febbe6c42751-host-proc-sys-net\") on node \"ci-4152-2-3-n-5005c4e40f.novalocal\" DevicePath \"\"" May 15 05:09:28.316589 systemd[1]: Removed slice kubepods-besteffort-pod81f32847_03ba_451a_a36c_94772ef4219e.slice - libcontainer container kubepods-besteffort-pod81f32847_03ba_451a_a36c_94772ef4219e.slice. May 15 05:09:28.317590 systemd[1]: kubepods-besteffort-pod81f32847_03ba_451a_a36c_94772ef4219e.slice: Consumed 1.468s CPU time. May 15 05:09:28.485047 kubelet[2648]: I0515 05:09:28.484865 2648 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81f32847-03ba-451a-a36c-94772ef4219e" path="/var/lib/kubelet/pods/81f32847-03ba-451a-a36c-94772ef4219e/volumes" May 15 05:09:28.501413 systemd[1]: Removed slice kubepods-burstable-pod0fedb74b_4281_479e_9e30_febbe6c42751.slice - libcontainer container kubepods-burstable-pod0fedb74b_4281_479e_9e30_febbe6c42751.slice. May 15 05:09:28.502068 systemd[1]: kubepods-burstable-pod0fedb74b_4281_479e_9e30_febbe6c42751.slice: Consumed 11.501s CPU time. May 15 05:09:28.690551 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d3f1e5065262948b8ea3560a0a40bdeaee1ec9362437a9bfce87e8ff5e370bc1-rootfs.mount: Deactivated successfully. 
May 15 05:09:28.690822 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0c2b6faca86477086cf9616abb798570fc42da683bd6e4bff6972528df2b11ec-rootfs.mount: Deactivated successfully.
May 15 05:09:28.690984 systemd[1]: var-lib-kubelet-pods-0fedb74b\x2d4281\x2d479e\x2d9e30\x2dfebbe6c42751-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
May 15 05:09:28.691161 systemd[1]: var-lib-kubelet-pods-0fedb74b\x2d4281\x2d479e\x2d9e30\x2dfebbe6c42751-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
May 15 05:09:28.691386 systemd[1]: var-lib-kubelet-pods-81f32847\x2d03ba\x2d451a\x2da36c\x2d94772ef4219e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d2dmt9.mount: Deactivated successfully.
May 15 05:09:28.691570 systemd[1]: var-lib-kubelet-pods-0fedb74b\x2d4281\x2d479e\x2d9e30\x2dfebbe6c42751-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dgjrjs.mount: Deactivated successfully.
May 15 05:09:28.714893 kubelet[2648]: E0515 05:09:28.714763 2648 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 15 05:09:29.033366 kubelet[2648]: I0515 05:09:29.032982 2648 scope.go:117] "RemoveContainer" containerID="4850ce5d469dd16b4b7c8bc95a6540b058c407d9e45e874577794551edac278f"
May 15 05:09:29.046715 containerd[1475]: time="2025-05-15T05:09:29.046589727Z" level=info msg="RemoveContainer for \"4850ce5d469dd16b4b7c8bc95a6540b058c407d9e45e874577794551edac278f\""
May 15 05:09:29.060738 containerd[1475]: time="2025-05-15T05:09:29.060519378Z" level=info msg="RemoveContainer for \"4850ce5d469dd16b4b7c8bc95a6540b058c407d9e45e874577794551edac278f\" returns successfully"
May 15 05:09:29.061558 kubelet[2648]: I0515 05:09:29.061417 2648 scope.go:117] "RemoveContainer" containerID="6bedfedc298f93e00c70b952d7a8008c7990670d014204209e2f8cdce015de58"
May 15 05:09:29.068762 containerd[1475]: time="2025-05-15T05:09:29.068680299Z" level=info msg="RemoveContainer for \"6bedfedc298f93e00c70b952d7a8008c7990670d014204209e2f8cdce015de58\""
May 15 05:09:29.078693 containerd[1475]: time="2025-05-15T05:09:29.078411528Z" level=info msg="RemoveContainer for \"6bedfedc298f93e00c70b952d7a8008c7990670d014204209e2f8cdce015de58\" returns successfully"
May 15 05:09:29.079460 kubelet[2648]: I0515 05:09:29.079401 2648 scope.go:117] "RemoveContainer" containerID="dfb6233e62b80b6122d32020302bcbbed0069f93de57d967a0df17210d67d12d"
May 15 05:09:29.083699 containerd[1475]: time="2025-05-15T05:09:29.082573672Z" level=info msg="RemoveContainer for \"dfb6233e62b80b6122d32020302bcbbed0069f93de57d967a0df17210d67d12d\""
May 15 05:09:29.091117 containerd[1475]: time="2025-05-15T05:09:29.091070824Z" level=info msg="RemoveContainer for \"dfb6233e62b80b6122d32020302bcbbed0069f93de57d967a0df17210d67d12d\" returns successfully"
May 15 05:09:29.092642 kubelet[2648]: I0515 05:09:29.092546 2648 scope.go:117] "RemoveContainer" containerID="c899204a4078196899f27343933844041501774699781ace64318000af1cb4b3"
May 15 05:09:29.099026 containerd[1475]: time="2025-05-15T05:09:29.098139685Z" level=info msg="RemoveContainer for \"c899204a4078196899f27343933844041501774699781ace64318000af1cb4b3\""
May 15 05:09:29.103673 containerd[1475]: time="2025-05-15T05:09:29.103636174Z" level=info msg="RemoveContainer for \"c899204a4078196899f27343933844041501774699781ace64318000af1cb4b3\" returns successfully"
May 15 05:09:29.104101 kubelet[2648]: I0515 05:09:29.104068 2648 scope.go:117] "RemoveContainer" containerID="a484c53b7ebf6e5b207cf1c0e31ef1587ef0ee03be238861c47165f9bfc7db53"
May 15 05:09:29.105618 containerd[1475]: time="2025-05-15T05:09:29.105582548Z" level=info msg="RemoveContainer for \"a484c53b7ebf6e5b207cf1c0e31ef1587ef0ee03be238861c47165f9bfc7db53\""
May 15 05:09:29.110077 containerd[1475]: time="2025-05-15T05:09:29.110030208Z" level=info msg="RemoveContainer for \"a484c53b7ebf6e5b207cf1c0e31ef1587ef0ee03be238861c47165f9bfc7db53\" returns successfully"
May 15 05:09:29.472265 kubelet[2648]: E0515 05:09:29.471946 2648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-6f6b679f8f-45jw6" podUID="d8661aa3-ee39-47ff-b949-4c5dfdaef7c4"
May 15 05:09:29.671394 sshd[4274]: Connection closed by 172.24.4.1 port 51188
May 15 05:09:29.673036 sshd-session[4272]: pam_unix(sshd:session): session closed for user core
May 15 05:09:29.700002 systemd[1]: Started sshd@25-172.24.4.5:22-172.24.4.1:51192.service - OpenSSH per-connection server daemon (172.24.4.1:51192).
May 15 05:09:29.702126 systemd[1]: sshd@24-172.24.4.5:22-172.24.4.1:51188.service: Deactivated successfully.
May 15 05:09:29.710422 systemd[1]: session-27.scope: Deactivated successfully.
May 15 05:09:29.711785 systemd[1]: session-27.scope: Consumed 1.280s CPU time.
May 15 05:09:29.714082 systemd-logind[1440]: Session 27 logged out. Waiting for processes to exit.
May 15 05:09:29.720699 systemd-logind[1440]: Removed session 27.
May 15 05:09:30.479989 kubelet[2648]: I0515 05:09:30.479858 2648 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0fedb74b-4281-479e-9e30-febbe6c42751" path="/var/lib/kubelet/pods/0fedb74b-4281-479e-9e30-febbe6c42751/volumes"
May 15 05:09:30.929258 sshd[4436]: Accepted publickey for core from 172.24.4.1 port 51192 ssh2: RSA SHA256:uRPz9tu+U72XmHvlxQy05CWlny1Lwcfk85X9SSPLUAc
May 15 05:09:30.932902 sshd-session[4436]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 05:09:30.948466 systemd-logind[1440]: New session 28 of user core.
May 15 05:09:30.957748 systemd[1]: Started session-28.scope - Session 28 of User core.
May 15 05:09:31.473849 kubelet[2648]: E0515 05:09:31.472763 2648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-6f6b679f8f-45jw6" podUID="d8661aa3-ee39-47ff-b949-4c5dfdaef7c4"
May 15 05:09:32.731635 kubelet[2648]: E0515 05:09:32.731518 2648 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0fedb74b-4281-479e-9e30-febbe6c42751" containerName="apply-sysctl-overwrites"
May 15 05:09:32.737291 kubelet[2648]: E0515 05:09:32.735382 2648 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0fedb74b-4281-479e-9e30-febbe6c42751" containerName="mount-bpf-fs"
May 15 05:09:32.737291 kubelet[2648]: E0515 05:09:32.735400 2648 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0fedb74b-4281-479e-9e30-febbe6c42751" containerName="clean-cilium-state"
May 15 05:09:32.737291 kubelet[2648]: E0515 05:09:32.735408 2648 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0fedb74b-4281-479e-9e30-febbe6c42751" containerName="cilium-agent"
May 15 05:09:32.737291 kubelet[2648]: E0515 05:09:32.735417 2648 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="81f32847-03ba-451a-a36c-94772ef4219e" containerName="cilium-operator"
May 15 05:09:32.737291 kubelet[2648]: E0515 05:09:32.735459 2648 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0fedb74b-4281-479e-9e30-febbe6c42751" containerName="mount-cgroup"
May 15 05:09:32.737291 kubelet[2648]: I0515 05:09:32.735554 2648 memory_manager.go:354] "RemoveStaleState removing state" podUID="81f32847-03ba-451a-a36c-94772ef4219e" containerName="cilium-operator"
May 15 05:09:32.737291 kubelet[2648]: I0515 05:09:32.735565 2648 memory_manager.go:354] "RemoveStaleState removing state" podUID="0fedb74b-4281-479e-9e30-febbe6c42751" containerName="cilium-agent"
May 15 05:09:32.757156 systemd[1]: Created slice kubepods-burstable-podbfb6473a_ca70_4d9d_97c0_930c755f3809.slice - libcontainer container kubepods-burstable-podbfb6473a_ca70_4d9d_97c0_930c755f3809.slice.
May 15 05:09:32.843606 sshd[4440]: Connection closed by 172.24.4.1 port 51192
May 15 05:09:32.846858 sshd-session[4436]: pam_unix(sshd:session): session closed for user core
May 15 05:09:32.862808 systemd[1]: sshd@25-172.24.4.5:22-172.24.4.1:51192.service: Deactivated successfully.
May 15 05:09:32.868997 systemd[1]: session-28.scope: Deactivated successfully.
May 15 05:09:32.869645 systemd[1]: session-28.scope: Consumed 1.251s CPU time.
May 15 05:09:32.872432 systemd-logind[1440]: Session 28 logged out. Waiting for processes to exit.
May 15 05:09:32.878740 systemd[1]: Started sshd@26-172.24.4.5:22-172.24.4.1:51208.service - OpenSSH per-connection server daemon (172.24.4.1:51208).
May 15 05:09:32.881104 systemd-logind[1440]: Removed session 28.
May 15 05:09:32.900148 kubelet[2648]: I0515 05:09:32.898854 2648 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/bfb6473a-ca70-4d9d-97c0-930c755f3809-clustermesh-secrets\") pod \"cilium-kz8sj\" (UID: \"bfb6473a-ca70-4d9d-97c0-930c755f3809\") " pod="kube-system/cilium-kz8sj"
May 15 05:09:32.900148 kubelet[2648]: I0515 05:09:32.898955 2648 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bfb6473a-ca70-4d9d-97c0-930c755f3809-lib-modules\") pod \"cilium-kz8sj\" (UID: \"bfb6473a-ca70-4d9d-97c0-930c755f3809\") " pod="kube-system/cilium-kz8sj"
May 15 05:09:32.900148 kubelet[2648]: I0515 05:09:32.899021 2648 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/bfb6473a-ca70-4d9d-97c0-930c755f3809-cilium-ipsec-secrets\") pod \"cilium-kz8sj\" (UID: \"bfb6473a-ca70-4d9d-97c0-930c755f3809\") " pod="kube-system/cilium-kz8sj"
May 15 05:09:32.900148 kubelet[2648]: I0515 05:09:32.899082 2648 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/bfb6473a-ca70-4d9d-97c0-930c755f3809-cni-path\") pod \"cilium-kz8sj\" (UID: \"bfb6473a-ca70-4d9d-97c0-930c755f3809\") " pod="kube-system/cilium-kz8sj"
May 15 05:09:32.900148 kubelet[2648]: I0515 05:09:32.899142 2648 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bfb6473a-ca70-4d9d-97c0-930c755f3809-etc-cni-netd\") pod \"cilium-kz8sj\" (UID: \"bfb6473a-ca70-4d9d-97c0-930c755f3809\") " pod="kube-system/cilium-kz8sj"
May 15 05:09:32.900148 kubelet[2648]: I0515 05:09:32.899196 2648 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bfb6473a-ca70-4d9d-97c0-930c755f3809-cilium-config-path\") pod \"cilium-kz8sj\" (UID: \"bfb6473a-ca70-4d9d-97c0-930c755f3809\") " pod="kube-system/cilium-kz8sj"
May 15 05:09:32.900797 kubelet[2648]: I0515 05:09:32.899254 2648 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/bfb6473a-ca70-4d9d-97c0-930c755f3809-hostproc\") pod \"cilium-kz8sj\" (UID: \"bfb6473a-ca70-4d9d-97c0-930c755f3809\") " pod="kube-system/cilium-kz8sj"
May 15 05:09:32.900797 kubelet[2648]: I0515 05:09:32.899310 2648 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bfb6473a-ca70-4d9d-97c0-930c755f3809-xtables-lock\") pod \"cilium-kz8sj\" (UID: \"bfb6473a-ca70-4d9d-97c0-930c755f3809\") " pod="kube-system/cilium-kz8sj"
May 15 05:09:32.900797 kubelet[2648]: I0515 05:09:32.899472 2648 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/bfb6473a-ca70-4d9d-97c0-930c755f3809-cilium-cgroup\") pod \"cilium-kz8sj\" (UID: \"bfb6473a-ca70-4d9d-97c0-930c755f3809\") " pod="kube-system/cilium-kz8sj"
May 15 05:09:32.900797 kubelet[2648]: I0515 05:09:32.899539 2648 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/bfb6473a-ca70-4d9d-97c0-930c755f3809-host-proc-sys-net\") pod \"cilium-kz8sj\" (UID: \"bfb6473a-ca70-4d9d-97c0-930c755f3809\") " pod="kube-system/cilium-kz8sj"
May 15 05:09:32.900797 kubelet[2648]: I0515 05:09:32.899593 2648 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxxl7\" (UniqueName: \"kubernetes.io/projected/bfb6473a-ca70-4d9d-97c0-930c755f3809-kube-api-access-sxxl7\") pod \"cilium-kz8sj\" (UID: \"bfb6473a-ca70-4d9d-97c0-930c755f3809\") " pod="kube-system/cilium-kz8sj"
May 15 05:09:32.900797 kubelet[2648]: I0515 05:09:32.899681 2648 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/bfb6473a-ca70-4d9d-97c0-930c755f3809-cilium-run\") pod \"cilium-kz8sj\" (UID: \"bfb6473a-ca70-4d9d-97c0-930c755f3809\") " pod="kube-system/cilium-kz8sj"
May 15 05:09:32.901304 kubelet[2648]: I0515 05:09:32.899743 2648 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/bfb6473a-ca70-4d9d-97c0-930c755f3809-bpf-maps\") pod \"cilium-kz8sj\" (UID: \"bfb6473a-ca70-4d9d-97c0-930c755f3809\") " pod="kube-system/cilium-kz8sj"
May 15 05:09:32.901304 kubelet[2648]: I0515 05:09:32.899808 2648 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/bfb6473a-ca70-4d9d-97c0-930c755f3809-host-proc-sys-kernel\") pod \"cilium-kz8sj\" (UID: \"bfb6473a-ca70-4d9d-97c0-930c755f3809\") " pod="kube-system/cilium-kz8sj"
May 15 05:09:32.901304 kubelet[2648]: I0515 05:09:32.899871 2648 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/bfb6473a-ca70-4d9d-97c0-930c755f3809-hubble-tls\") pod \"cilium-kz8sj\" (UID: \"bfb6473a-ca70-4d9d-97c0-930c755f3809\") " pod="kube-system/cilium-kz8sj"
May 15 05:09:33.367910 containerd[1475]: time="2025-05-15T05:09:33.367554354Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kz8sj,Uid:bfb6473a-ca70-4d9d-97c0-930c755f3809,Namespace:kube-system,Attempt:0,}"
May 15 05:09:33.448921 containerd[1475]: time="2025-05-15T05:09:33.448560517Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 15 05:09:33.448921 containerd[1475]: time="2025-05-15T05:09:33.448773617Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 15 05:09:33.448921 containerd[1475]: time="2025-05-15T05:09:33.448826897Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 15 05:09:33.450623 containerd[1475]: time="2025-05-15T05:09:33.449079171Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 15 05:09:33.476303 kubelet[2648]: E0515 05:09:33.473991 2648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-6f6b679f8f-45jw6" podUID="d8661aa3-ee39-47ff-b949-4c5dfdaef7c4"
May 15 05:09:33.502522 systemd[1]: Started cri-containerd-91ff1dcb247f1099682a656101cdb77d0577526f8e51c26ebbb6fd5f30473ef4.scope - libcontainer container 91ff1dcb247f1099682a656101cdb77d0577526f8e51c26ebbb6fd5f30473ef4.
May 15 05:09:33.537256 containerd[1475]: time="2025-05-15T05:09:33.537083511Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kz8sj,Uid:bfb6473a-ca70-4d9d-97c0-930c755f3809,Namespace:kube-system,Attempt:0,} returns sandbox id \"91ff1dcb247f1099682a656101cdb77d0577526f8e51c26ebbb6fd5f30473ef4\""
May 15 05:09:33.543311 containerd[1475]: time="2025-05-15T05:09:33.542896173Z" level=info msg="CreateContainer within sandbox \"91ff1dcb247f1099682a656101cdb77d0577526f8e51c26ebbb6fd5f30473ef4\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
May 15 05:09:33.566061 containerd[1475]: time="2025-05-15T05:09:33.565997141Z" level=info msg="CreateContainer within sandbox \"91ff1dcb247f1099682a656101cdb77d0577526f8e51c26ebbb6fd5f30473ef4\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"09f3f08356f241a9c665ba7f546c0d54e13dea12ee62a24bfe4ba1a3fae1f63f\""
May 15 05:09:33.568918 containerd[1475]: time="2025-05-15T05:09:33.568334479Z" level=info msg="StartContainer for \"09f3f08356f241a9c665ba7f546c0d54e13dea12ee62a24bfe4ba1a3fae1f63f\""
May 15 05:09:33.601567 systemd[1]: Started cri-containerd-09f3f08356f241a9c665ba7f546c0d54e13dea12ee62a24bfe4ba1a3fae1f63f.scope - libcontainer container 09f3f08356f241a9c665ba7f546c0d54e13dea12ee62a24bfe4ba1a3fae1f63f.
May 15 05:09:33.636298 containerd[1475]: time="2025-05-15T05:09:33.636154627Z" level=info msg="StartContainer for \"09f3f08356f241a9c665ba7f546c0d54e13dea12ee62a24bfe4ba1a3fae1f63f\" returns successfully"
May 15 05:09:33.650149 systemd[1]: cri-containerd-09f3f08356f241a9c665ba7f546c0d54e13dea12ee62a24bfe4ba1a3fae1f63f.scope: Deactivated successfully.
May 15 05:09:33.702491 containerd[1475]: time="2025-05-15T05:09:33.702296445Z" level=info msg="shim disconnected" id=09f3f08356f241a9c665ba7f546c0d54e13dea12ee62a24bfe4ba1a3fae1f63f namespace=k8s.io
May 15 05:09:33.702701 containerd[1475]: time="2025-05-15T05:09:33.702483557Z" level=warning msg="cleaning up after shim disconnected" id=09f3f08356f241a9c665ba7f546c0d54e13dea12ee62a24bfe4ba1a3fae1f63f namespace=k8s.io
May 15 05:09:33.702701 containerd[1475]: time="2025-05-15T05:09:33.702533100Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 15 05:09:33.717352 kubelet[2648]: E0515 05:09:33.716980 2648 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 15 05:09:34.095107 containerd[1475]: time="2025-05-15T05:09:34.094792367Z" level=info msg="CreateContainer within sandbox \"91ff1dcb247f1099682a656101cdb77d0577526f8e51c26ebbb6fd5f30473ef4\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
May 15 05:09:34.113907 sshd[4449]: Accepted publickey for core from 172.24.4.1 port 51208 ssh2: RSA SHA256:uRPz9tu+U72XmHvlxQy05CWlny1Lwcfk85X9SSPLUAc
May 15 05:09:34.128952 sshd-session[4449]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 05:09:34.168292 systemd-logind[1440]: New session 29 of user core.
May 15 05:09:34.172100 systemd[1]: Started session-29.scope - Session 29 of User core.
May 15 05:09:34.172585 containerd[1475]: time="2025-05-15T05:09:34.171832253Z" level=info msg="CreateContainer within sandbox \"91ff1dcb247f1099682a656101cdb77d0577526f8e51c26ebbb6fd5f30473ef4\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"0faa2068ecf59f18e0aff2644e570fba73480e8f69305217be4a2bbf028078d4\""
May 15 05:09:34.179392 containerd[1475]: time="2025-05-15T05:09:34.177528796Z" level=info msg="StartContainer for \"0faa2068ecf59f18e0aff2644e570fba73480e8f69305217be4a2bbf028078d4\""
May 15 05:09:34.231502 systemd[1]: Started cri-containerd-0faa2068ecf59f18e0aff2644e570fba73480e8f69305217be4a2bbf028078d4.scope - libcontainer container 0faa2068ecf59f18e0aff2644e570fba73480e8f69305217be4a2bbf028078d4.
May 15 05:09:34.273070 containerd[1475]: time="2025-05-15T05:09:34.272877173Z" level=info msg="StartContainer for \"0faa2068ecf59f18e0aff2644e570fba73480e8f69305217be4a2bbf028078d4\" returns successfully"
May 15 05:09:34.280689 systemd[1]: cri-containerd-0faa2068ecf59f18e0aff2644e570fba73480e8f69305217be4a2bbf028078d4.scope: Deactivated successfully.
May 15 05:09:34.318089 containerd[1475]: time="2025-05-15T05:09:34.317985615Z" level=info msg="shim disconnected" id=0faa2068ecf59f18e0aff2644e570fba73480e8f69305217be4a2bbf028078d4 namespace=k8s.io
May 15 05:09:34.318089 containerd[1475]: time="2025-05-15T05:09:34.318068079Z" level=warning msg="cleaning up after shim disconnected" id=0faa2068ecf59f18e0aff2644e570fba73480e8f69305217be4a2bbf028078d4 namespace=k8s.io
May 15 05:09:34.318089 containerd[1475]: time="2025-05-15T05:09:34.318078268Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 15 05:09:34.792369 sshd[4562]: Connection closed by 172.24.4.1 port 51208
May 15 05:09:34.793798 sshd-session[4449]: pam_unix(sshd:session): session closed for user core
May 15 05:09:34.811155 systemd[1]: sshd@26-172.24.4.5:22-172.24.4.1:51208.service: Deactivated successfully.
May 15 05:09:34.817915 systemd[1]: session-29.scope: Deactivated successfully.
May 15 05:09:34.821396 systemd-logind[1440]: Session 29 logged out. Waiting for processes to exit.
May 15 05:09:34.832006 systemd[1]: Started sshd@27-172.24.4.5:22-172.24.4.1:59284.service - OpenSSH per-connection server daemon (172.24.4.1:59284).
May 15 05:09:34.835610 systemd-logind[1440]: Removed session 29.
May 15 05:09:34.893195 kubelet[2648]: I0515 05:09:34.893087 2648 setters.go:600] "Node became not ready" node="ci-4152-2-3-n-5005c4e40f.novalocal" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-15T05:09:34Z","lastTransitionTime":"2025-05-15T05:09:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
May 15 05:09:35.024055 systemd[1]: run-containerd-runc-k8s.io-0faa2068ecf59f18e0aff2644e570fba73480e8f69305217be4a2bbf028078d4-runc.TZrF1W.mount: Deactivated successfully.
May 15 05:09:35.024877 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0faa2068ecf59f18e0aff2644e570fba73480e8f69305217be4a2bbf028078d4-rootfs.mount: Deactivated successfully.
May 15 05:09:35.105572 containerd[1475]: time="2025-05-15T05:09:35.105270944Z" level=info msg="CreateContainer within sandbox \"91ff1dcb247f1099682a656101cdb77d0577526f8e51c26ebbb6fd5f30473ef4\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
May 15 05:09:35.152202 containerd[1475]: time="2025-05-15T05:09:35.152042187Z" level=info msg="CreateContainer within sandbox \"91ff1dcb247f1099682a656101cdb77d0577526f8e51c26ebbb6fd5f30473ef4\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"c9f93df7e175c2c3e818dabd66fd65df9348ee675a02d7edb55add8e5be10056\""
May 15 05:09:35.155934 containerd[1475]: time="2025-05-15T05:09:35.155862327Z" level=info msg="StartContainer for \"c9f93df7e175c2c3e818dabd66fd65df9348ee675a02d7edb55add8e5be10056\""
May 15 05:09:35.218529 systemd[1]: Started cri-containerd-c9f93df7e175c2c3e818dabd66fd65df9348ee675a02d7edb55add8e5be10056.scope - libcontainer container c9f93df7e175c2c3e818dabd66fd65df9348ee675a02d7edb55add8e5be10056.
May 15 05:09:35.267819 systemd[1]: cri-containerd-c9f93df7e175c2c3e818dabd66fd65df9348ee675a02d7edb55add8e5be10056.scope: Deactivated successfully.
May 15 05:09:35.271922 containerd[1475]: time="2025-05-15T05:09:35.271866780Z" level=info msg="StartContainer for \"c9f93df7e175c2c3e818dabd66fd65df9348ee675a02d7edb55add8e5be10056\" returns successfully"
May 15 05:09:35.295158 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c9f93df7e175c2c3e818dabd66fd65df9348ee675a02d7edb55add8e5be10056-rootfs.mount: Deactivated successfully.
May 15 05:09:35.303812 containerd[1475]: time="2025-05-15T05:09:35.303723414Z" level=info msg="shim disconnected" id=c9f93df7e175c2c3e818dabd66fd65df9348ee675a02d7edb55add8e5be10056 namespace=k8s.io
May 15 05:09:35.303812 containerd[1475]: time="2025-05-15T05:09:35.303811880Z" level=warning msg="cleaning up after shim disconnected" id=c9f93df7e175c2c3e818dabd66fd65df9348ee675a02d7edb55add8e5be10056 namespace=k8s.io
May 15 05:09:35.305435 containerd[1475]: time="2025-05-15T05:09:35.303823923Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 15 05:09:35.472800 kubelet[2648]: E0515 05:09:35.472545 2648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-6f6b679f8f-45jw6" podUID="d8661aa3-ee39-47ff-b949-4c5dfdaef7c4"
May 15 05:09:36.052477 sshd[4626]: Accepted publickey for core from 172.24.4.1 port 59284 ssh2: RSA SHA256:uRPz9tu+U72XmHvlxQy05CWlny1Lwcfk85X9SSPLUAc
May 15 05:09:36.057638 sshd-session[4626]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 05:09:36.071577 systemd-logind[1440]: New session 30 of user core.
May 15 05:09:36.079804 systemd[1]: Started session-30.scope - Session 30 of User core.
May 15 05:09:36.121220 containerd[1475]: time="2025-05-15T05:09:36.120896071Z" level=info msg="CreateContainer within sandbox \"91ff1dcb247f1099682a656101cdb77d0577526f8e51c26ebbb6fd5f30473ef4\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 15 05:09:36.169307 containerd[1475]: time="2025-05-15T05:09:36.168921698Z" level=info msg="CreateContainer within sandbox \"91ff1dcb247f1099682a656101cdb77d0577526f8e51c26ebbb6fd5f30473ef4\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"ea8a3ea32469d17397bb99a69b32f04e51697b8babe276d07591a166571adee0\""
May 15 05:09:36.171359 containerd[1475]: time="2025-05-15T05:09:36.171283491Z" level=info msg="StartContainer for \"ea8a3ea32469d17397bb99a69b32f04e51697b8babe276d07591a166571adee0\""
May 15 05:09:36.220636 systemd[1]: Started cri-containerd-ea8a3ea32469d17397bb99a69b32f04e51697b8babe276d07591a166571adee0.scope - libcontainer container ea8a3ea32469d17397bb99a69b32f04e51697b8babe276d07591a166571adee0.
May 15 05:09:36.247820 systemd[1]: cri-containerd-ea8a3ea32469d17397bb99a69b32f04e51697b8babe276d07591a166571adee0.scope: Deactivated successfully.
May 15 05:09:36.250746 containerd[1475]: time="2025-05-15T05:09:36.250464495Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbfb6473a_ca70_4d9d_97c0_930c755f3809.slice/cri-containerd-ea8a3ea32469d17397bb99a69b32f04e51697b8babe276d07591a166571adee0.scope/memory.events\": no such file or directory"
May 15 05:09:36.258146 containerd[1475]: time="2025-05-15T05:09:36.257968251Z" level=info msg="StartContainer for \"ea8a3ea32469d17397bb99a69b32f04e51697b8babe276d07591a166571adee0\" returns successfully"
May 15 05:09:36.287420 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ea8a3ea32469d17397bb99a69b32f04e51697b8babe276d07591a166571adee0-rootfs.mount: Deactivated successfully.
May 15 05:09:36.294753 containerd[1475]: time="2025-05-15T05:09:36.294663849Z" level=info msg="shim disconnected" id=ea8a3ea32469d17397bb99a69b32f04e51697b8babe276d07591a166571adee0 namespace=k8s.io
May 15 05:09:36.294753 containerd[1475]: time="2025-05-15T05:09:36.294794383Z" level=warning msg="cleaning up after shim disconnected" id=ea8a3ea32469d17397bb99a69b32f04e51697b8babe276d07591a166571adee0 namespace=k8s.io
May 15 05:09:36.294753 containerd[1475]: time="2025-05-15T05:09:36.294810734Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 15 05:09:37.132095 containerd[1475]: time="2025-05-15T05:09:37.131870054Z" level=info msg="CreateContainer within sandbox \"91ff1dcb247f1099682a656101cdb77d0577526f8e51c26ebbb6fd5f30473ef4\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 15 05:09:37.200489 containerd[1475]: time="2025-05-15T05:09:37.198570758Z" level=info msg="CreateContainer within sandbox \"91ff1dcb247f1099682a656101cdb77d0577526f8e51c26ebbb6fd5f30473ef4\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"a88699014e9bacc606b22dde0bfcd394a5a36be2af2c229a47c4c6232f9e7bd5\""
May 15 05:09:37.200489 containerd[1475]: time="2025-05-15T05:09:37.199291341Z" level=info msg="StartContainer for \"a88699014e9bacc606b22dde0bfcd394a5a36be2af2c229a47c4c6232f9e7bd5\""
May 15 05:09:37.248707 systemd[1]: run-containerd-runc-k8s.io-a88699014e9bacc606b22dde0bfcd394a5a36be2af2c229a47c4c6232f9e7bd5-runc.BBl4yO.mount: Deactivated successfully.
May 15 05:09:37.256488 systemd[1]: Started cri-containerd-a88699014e9bacc606b22dde0bfcd394a5a36be2af2c229a47c4c6232f9e7bd5.scope - libcontainer container a88699014e9bacc606b22dde0bfcd394a5a36be2af2c229a47c4c6232f9e7bd5.
May 15 05:09:37.301982 containerd[1475]: time="2025-05-15T05:09:37.301929749Z" level=info msg="StartContainer for \"a88699014e9bacc606b22dde0bfcd394a5a36be2af2c229a47c4c6232f9e7bd5\" returns successfully"
May 15 05:09:37.474522 kubelet[2648]: E0515 05:09:37.473747 2648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-6f6b679f8f-45jw6" podUID="d8661aa3-ee39-47ff-b949-4c5dfdaef7c4"
May 15 05:09:37.851954 kernel: cryptd: max_cpu_qlen set to 1000
May 15 05:09:38.002574 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm_base(ctr(aes-generic),ghash-generic))))
May 15 05:09:38.193958 kubelet[2648]: I0515 05:09:38.193623 2648 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-kz8sj" podStartSLOduration=6.193577725 podStartE2EDuration="6.193577725s" podCreationTimestamp="2025-05-15 05:09:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 05:09:38.191552443 +0000 UTC m=+359.842138278" watchObservedRunningTime="2025-05-15 05:09:38.193577725 +0000 UTC m=+359.844163549"
May 15 05:09:38.533829 containerd[1475]: time="2025-05-15T05:09:38.533677595Z" level=info msg="StopPodSandbox for \"d3f1e5065262948b8ea3560a0a40bdeaee1ec9362437a9bfce87e8ff5e370bc1\""
May 15 05:09:38.534687 containerd[1475]: time="2025-05-15T05:09:38.533933144Z" level=info msg="TearDown network for sandbox \"d3f1e5065262948b8ea3560a0a40bdeaee1ec9362437a9bfce87e8ff5e370bc1\" successfully"
May 15 05:09:38.534687 containerd[1475]: time="2025-05-15T05:09:38.533954164Z" level=info msg="StopPodSandbox for \"d3f1e5065262948b8ea3560a0a40bdeaee1ec9362437a9bfce87e8ff5e370bc1\" returns successfully"
May 15 05:09:38.535651 containerd[1475]: time="2025-05-15T05:09:38.535587580Z" level=info msg="RemovePodSandbox for \"d3f1e5065262948b8ea3560a0a40bdeaee1ec9362437a9bfce87e8ff5e370bc1\""
May 15 05:09:38.535651 containerd[1475]: time="2025-05-15T05:09:38.535631553Z" level=info msg="Forcibly stopping sandbox \"d3f1e5065262948b8ea3560a0a40bdeaee1ec9362437a9bfce87e8ff5e370bc1\""
May 15 05:09:38.535926 containerd[1475]: time="2025-05-15T05:09:38.535697536Z" level=info msg="TearDown network for sandbox \"d3f1e5065262948b8ea3560a0a40bdeaee1ec9362437a9bfce87e8ff5e370bc1\" successfully"
May 15 05:09:38.540198 containerd[1475]: time="2025-05-15T05:09:38.540118766Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d3f1e5065262948b8ea3560a0a40bdeaee1ec9362437a9bfce87e8ff5e370bc1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
May 15 05:09:38.540438 containerd[1475]: time="2025-05-15T05:09:38.540211049Z" level=info msg="RemovePodSandbox \"d3f1e5065262948b8ea3560a0a40bdeaee1ec9362437a9bfce87e8ff5e370bc1\" returns successfully"
May 15 05:09:38.540812 containerd[1475]: time="2025-05-15T05:09:38.540749721Z" level=info msg="StopPodSandbox for \"0c2b6faca86477086cf9616abb798570fc42da683bd6e4bff6972528df2b11ec\""
May 15 05:09:38.541063 containerd[1475]: time="2025-05-15T05:09:38.540864827Z" level=info msg="TearDown network for sandbox \"0c2b6faca86477086cf9616abb798570fc42da683bd6e4bff6972528df2b11ec\" successfully"
May 15 05:09:38.541063 containerd[1475]: time="2025-05-15T05:09:38.540886848Z" level=info msg="StopPodSandbox for \"0c2b6faca86477086cf9616abb798570fc42da683bd6e4bff6972528df2b11ec\" returns successfully"
May 15 05:09:38.541546 containerd[1475]: time="2025-05-15T05:09:38.541388199Z" level=info msg="RemovePodSandbox for \"0c2b6faca86477086cf9616abb798570fc42da683bd6e4bff6972528df2b11ec\""
May 15 05:09:38.541546 containerd[1475]: time="2025-05-15T05:09:38.541412214Z" level=info msg="Forcibly stopping sandbox \"0c2b6faca86477086cf9616abb798570fc42da683bd6e4bff6972528df2b11ec\""
May 15 05:09:38.541728 containerd[1475]: time="2025-05-15T05:09:38.541586782Z" level=info msg="TearDown network for sandbox \"0c2b6faca86477086cf9616abb798570fc42da683bd6e4bff6972528df2b11ec\" successfully"
May 15 05:09:38.548478 containerd[1475]: time="2025-05-15T05:09:38.548396996Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0c2b6faca86477086cf9616abb798570fc42da683bd6e4bff6972528df2b11ec\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
May 15 05:09:38.548478 containerd[1475]: time="2025-05-15T05:09:38.548478980Z" level=info msg="RemovePodSandbox \"0c2b6faca86477086cf9616abb798570fc42da683bd6e4bff6972528df2b11ec\" returns successfully"
May 15 05:09:41.776285 systemd-networkd[1371]: lxc_health: Link UP
May 15 05:09:41.782221 systemd-networkd[1371]: lxc_health: Gained carrier
May 15 05:09:43.310609 systemd-networkd[1371]: lxc_health: Gained IPv6LL
May 15 05:09:45.486879 systemd[1]: run-containerd-runc-k8s.io-a88699014e9bacc606b22dde0bfcd394a5a36be2af2c229a47c4c6232f9e7bd5-runc.ZZSJdm.mount: Deactivated successfully.
May 15 05:09:47.782673 kubelet[2648]: E0515 05:09:47.782620 2648 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:44154->127.0.0.1:36517: write tcp 127.0.0.1:44154->127.0.0.1:36517: write: broken pipe
May 15 05:09:47.945137 sshd[4685]: Connection closed by 172.24.4.1 port 59284
May 15 05:09:47.949682 sshd-session[4626]: pam_unix(sshd:session): session closed for user core
May 15 05:09:47.957878 systemd[1]: sshd@27-172.24.4.5:22-172.24.4.1:59284.service: Deactivated successfully.
May 15 05:09:47.962086 systemd[1]: session-30.scope: Deactivated successfully.
May 15 05:09:47.963103 systemd-logind[1440]: Session 30 logged out. Waiting for processes to exit.
May 15 05:09:47.964156 systemd-logind[1440]: Removed session 30.