May 13 05:40:14.058031 kernel: Linux version 6.6.89-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon May 12 22:46:21 -00 2025
May 13 05:40:14.058087 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=a30636f72ddb6c7dc7c9bee07b7cf23b403029ba1ff64eed2705530c62c7b592
May 13 05:40:14.058111 kernel: BIOS-provided physical RAM map:
May 13 05:40:14.058129 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
May 13 05:40:14.058146 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
May 13 05:40:14.058168 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
May 13 05:40:14.058189 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdcfff] usable
May 13 05:40:14.058207 kernel: BIOS-e820: [mem 0x00000000bffdd000-0x00000000bfffffff] reserved
May 13 05:40:14.058225 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
May 13 05:40:14.058243 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
May 13 05:40:14.058261 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000013fffffff] usable
May 13 05:40:14.058279 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
May 13 05:40:14.058297 kernel: NX (Execute Disable) protection: active
May 13 05:40:14.058319 kernel: APIC: Static calls initialized
May 13 05:40:14.058341 kernel: SMBIOS 3.0.0 present.
May 13 05:40:14.058360 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.16.3-debian-1.16.3-2 04/01/2014
May 13 05:40:14.058379 kernel: Hypervisor detected: KVM
May 13 05:40:14.058398 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
May 13 05:40:14.058417 kernel: kvm-clock: using sched offset of 3589988010 cycles
May 13 05:40:14.058440 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
May 13 05:40:14.058520 kernel: tsc: Detected 1996.249 MHz processor
May 13 05:40:14.058544 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
May 13 05:40:14.058565 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
May 13 05:40:14.058584 kernel: last_pfn = 0x140000 max_arch_pfn = 0x400000000
May 13 05:40:14.058604 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
May 13 05:40:14.058624 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
May 13 05:40:14.058643 kernel: last_pfn = 0xbffdd max_arch_pfn = 0x400000000
May 13 05:40:14.058662 kernel: ACPI: Early table checksum verification disabled
May 13 05:40:14.058687 kernel: ACPI: RSDP 0x00000000000F51E0 000014 (v00 BOCHS )
May 13 05:40:14.058707 kernel: ACPI: RSDT 0x00000000BFFE1B65 000030 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 13 05:40:14.058726 kernel: ACPI: FACP 0x00000000BFFE1A49 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 13 05:40:14.058746 kernel: ACPI: DSDT 0x00000000BFFE0040 001A09 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 13 05:40:14.058765 kernel: ACPI: FACS 0x00000000BFFE0000 000040
May 13 05:40:14.058784 kernel: ACPI: APIC 0x00000000BFFE1ABD 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 13 05:40:14.058803 kernel: ACPI: WAET 0x00000000BFFE1B3D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 13 05:40:14.058822 kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1a49-0xbffe1abc]
May 13 05:40:14.058846 kernel: ACPI: Reserving DSDT table memory at [mem 0xbffe0040-0xbffe1a48]
May 13 05:40:14.058865 kernel: ACPI: Reserving FACS table memory at [mem 0xbffe0000-0xbffe003f]
May 13 05:40:14.058884 kernel: ACPI: Reserving APIC table memory at [mem 0xbffe1abd-0xbffe1b3c]
May 13 05:40:14.058903 kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1b3d-0xbffe1b64]
May 13 05:40:14.058930 kernel: No NUMA configuration found
May 13 05:40:14.058950 kernel: Faking a node at [mem 0x0000000000000000-0x000000013fffffff]
May 13 05:40:14.058970 kernel: NODE_DATA(0) allocated [mem 0x13fffa000-0x13fffffff]
May 13 05:40:14.058994 kernel: Zone ranges:
May 13 05:40:14.059036 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
May 13 05:40:14.059057 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
May 13 05:40:14.059077 kernel: Normal [mem 0x0000000100000000-0x000000013fffffff]
May 13 05:40:14.059097 kernel: Movable zone start for each node
May 13 05:40:14.059117 kernel: Early memory node ranges
May 13 05:40:14.059137 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
May 13 05:40:14.059157 kernel: node 0: [mem 0x0000000000100000-0x00000000bffdcfff]
May 13 05:40:14.059181 kernel: node 0: [mem 0x0000000100000000-0x000000013fffffff]
May 13 05:40:14.059201 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000013fffffff]
May 13 05:40:14.059221 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 13 05:40:14.059241 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
May 13 05:40:14.059262 kernel: On node 0, zone Normal: 35 pages in unavailable ranges
May 13 05:40:14.059282 kernel: ACPI: PM-Timer IO Port: 0x608
May 13 05:40:14.059302 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
May 13 05:40:14.059322 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
May 13 05:40:14.059342 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
May 13 05:40:14.059366 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
May 13 05:40:14.059386 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
May 13 05:40:14.059406 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
May 13 05:40:14.059426 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
May 13 05:40:14.059446 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
May 13 05:40:14.059496 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
May 13 05:40:14.059516 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
May 13 05:40:14.059537 kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
May 13 05:40:14.059557 kernel: Booting paravirtualized kernel on KVM
May 13 05:40:14.059582 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
May 13 05:40:14.059603 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
May 13 05:40:14.059623 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
May 13 05:40:14.059643 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
May 13 05:40:14.059663 kernel: pcpu-alloc: [0] 0 1
May 13 05:40:14.059682 kernel: kvm-guest: PV spinlocks disabled, no host support
May 13 05:40:14.059706 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=a30636f72ddb6c7dc7c9bee07b7cf23b403029ba1ff64eed2705530c62c7b592
May 13 05:40:14.059728 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 13 05:40:14.059752 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 13 05:40:14.059772 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 13 05:40:14.059792 kernel: Fallback order for Node 0: 0
May 13 05:40:14.059813 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1031901
May 13 05:40:14.059832 kernel: Policy zone: Normal
May 13 05:40:14.059853 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 13 05:40:14.059872 kernel: software IO TLB: area num 2.
May 13 05:40:14.059893 kernel: Memory: 3966220K/4193772K available (12288K kernel code, 2295K rwdata, 22740K rodata, 42864K init, 2328K bss, 227296K reserved, 0K cma-reserved)
May 13 05:40:14.059914 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
May 13 05:40:14.059937 kernel: ftrace: allocating 37944 entries in 149 pages
May 13 05:40:14.059957 kernel: ftrace: allocated 149 pages with 4 groups
May 13 05:40:14.059977 kernel: Dynamic Preempt: voluntary
May 13 05:40:14.059997 kernel: rcu: Preemptible hierarchical RCU implementation.
May 13 05:40:14.060019 kernel: rcu: RCU event tracing is enabled.
May 13 05:40:14.060039 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
May 13 05:40:14.060060 kernel: Trampoline variant of Tasks RCU enabled.
May 13 05:40:14.060081 kernel: Rude variant of Tasks RCU enabled.
May 13 05:40:14.060101 kernel: Tracing variant of Tasks RCU enabled.
May 13 05:40:14.060125 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 13 05:40:14.060146 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
May 13 05:40:14.060165 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
May 13 05:40:14.060186 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 13 05:40:14.060205 kernel: Console: colour VGA+ 80x25
May 13 05:40:14.060225 kernel: printk: console [tty0] enabled
May 13 05:40:14.060245 kernel: printk: console [ttyS0] enabled
May 13 05:40:14.060265 kernel: ACPI: Core revision 20230628
May 13 05:40:14.060285 kernel: APIC: Switch to symmetric I/O mode setup
May 13 05:40:14.060309 kernel: x2apic enabled
May 13 05:40:14.060329 kernel: APIC: Switched APIC routing to: physical x2apic
May 13 05:40:14.060349 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
May 13 05:40:14.060369 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
May 13 05:40:14.060390 kernel: Calibrating delay loop (skipped) preset value.. 3992.49 BogoMIPS (lpj=1996249)
May 13 05:40:14.060410 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
May 13 05:40:14.060430 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
May 13 05:40:14.060450 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
May 13 05:40:14.060521 kernel: Spectre V2 : Mitigation: Retpolines
May 13 05:40:14.060548 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
May 13 05:40:14.060568 kernel: Speculative Store Bypass: Vulnerable
May 13 05:40:14.060588 kernel: x86/fpu: x87 FPU will use FXSAVE
May 13 05:40:14.060608 kernel: Freeing SMP alternatives memory: 32K
May 13 05:40:14.060628 kernel: pid_max: default: 32768 minimum: 301
May 13 05:40:14.060662 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
May 13 05:40:14.060687 kernel: landlock: Up and running.
May 13 05:40:14.060708 kernel: SELinux: Initializing.
May 13 05:40:14.060729 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 13 05:40:14.060750 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 13 05:40:14.060772 kernel: smpboot: CPU0: AMD Intel Core i7 9xx (Nehalem Class Core i7) (family: 0x6, model: 0x1a, stepping: 0x3)
May 13 05:40:14.060793 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 13 05:40:14.060819 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 13 05:40:14.060841 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 13 05:40:14.060862 kernel: Performance Events: AMD PMU driver.
May 13 05:40:14.060883 kernel: ... version: 0
May 13 05:40:14.060904 kernel: ... bit width: 48
May 13 05:40:14.060928 kernel: ... generic registers: 4
May 13 05:40:14.060949 kernel: ... value mask: 0000ffffffffffff
May 13 05:40:14.060970 kernel: ... max period: 00007fffffffffff
May 13 05:40:14.060991 kernel: ... fixed-purpose events: 0
May 13 05:40:14.061012 kernel: ... event mask: 000000000000000f
May 13 05:40:14.061033 kernel: signal: max sigframe size: 1440
May 13 05:40:14.061054 kernel: rcu: Hierarchical SRCU implementation.
May 13 05:40:14.061076 kernel: rcu: Max phase no-delay instances is 400.
May 13 05:40:14.061097 kernel: smp: Bringing up secondary CPUs ...
May 13 05:40:14.061122 kernel: smpboot: x86: Booting SMP configuration:
May 13 05:40:14.061143 kernel: .... node #0, CPUs: #1
May 13 05:40:14.061164 kernel: smp: Brought up 1 node, 2 CPUs
May 13 05:40:14.061184 kernel: smpboot: Max logical packages: 2
May 13 05:40:14.061206 kernel: smpboot: Total of 2 processors activated (7984.99 BogoMIPS)
May 13 05:40:14.061227 kernel: devtmpfs: initialized
May 13 05:40:14.061248 kernel: x86/mm: Memory block size: 128MB
May 13 05:40:14.061269 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 13 05:40:14.061290 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
May 13 05:40:14.061315 kernel: pinctrl core: initialized pinctrl subsystem
May 13 05:40:14.061336 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 13 05:40:14.061357 kernel: audit: initializing netlink subsys (disabled)
May 13 05:40:14.061378 kernel: audit: type=2000 audit(1747114813.808:1): state=initialized audit_enabled=0 res=1
May 13 05:40:14.061399 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 13 05:40:14.061420 kernel: thermal_sys: Registered thermal governor 'user_space'
May 13 05:40:14.061441 kernel: cpuidle: using governor menu
May 13 05:40:14.061680 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 13 05:40:14.061715 kernel: dca service started, version 1.12.1
May 13 05:40:14.061744 kernel: PCI: Using configuration type 1 for base access
May 13 05:40:14.061767 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
May 13 05:40:14.061789 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 13 05:40:14.061811 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
May 13 05:40:14.061832 kernel: ACPI: Added _OSI(Module Device)
May 13 05:40:14.061853 kernel: ACPI: Added _OSI(Processor Device)
May 13 05:40:14.061874 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 13 05:40:14.061896 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 13 05:40:14.061917 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 13 05:40:14.061942 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
May 13 05:40:14.061963 kernel: ACPI: Interpreter enabled
May 13 05:40:14.061984 kernel: ACPI: PM: (supports S0 S3 S5)
May 13 05:40:14.062006 kernel: ACPI: Using IOAPIC for interrupt routing
May 13 05:40:14.062027 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
May 13 05:40:14.062049 kernel: PCI: Using E820 reservations for host bridge windows
May 13 05:40:14.062070 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
May 13 05:40:14.062092 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 13 05:40:14.062399 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
May 13 05:40:14.063645 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
May 13 05:40:14.064211 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
May 13 05:40:14.064251 kernel: acpiphp: Slot [3] registered
May 13 05:40:14.064273 kernel: acpiphp: Slot [4] registered
May 13 05:40:14.064294 kernel: acpiphp: Slot [5] registered
May 13 05:40:14.064315 kernel: acpiphp: Slot [6] registered
May 13 05:40:14.064336 kernel: acpiphp: Slot [7] registered
May 13 05:40:14.064357 kernel: acpiphp: Slot [8] registered
May 13 05:40:14.064387 kernel: acpiphp: Slot [9] registered
May 13 05:40:14.064408 kernel: acpiphp: Slot [10] registered
May 13 05:40:14.064429 kernel: acpiphp: Slot [11] registered
May 13 05:40:14.064450 kernel: acpiphp: Slot [12] registered
May 13 05:40:14.066555 kernel: acpiphp: Slot [13] registered
May 13 05:40:14.066584 kernel: acpiphp: Slot [14] registered
May 13 05:40:14.066606 kernel: acpiphp: Slot [15] registered
May 13 05:40:14.066627 kernel: acpiphp: Slot [16] registered
May 13 05:40:14.066648 kernel: acpiphp: Slot [17] registered
May 13 05:40:14.066678 kernel: acpiphp: Slot [18] registered
May 13 05:40:14.066699 kernel: acpiphp: Slot [19] registered
May 13 05:40:14.066720 kernel: acpiphp: Slot [20] registered
May 13 05:40:14.066741 kernel: acpiphp: Slot [21] registered
May 13 05:40:14.066761 kernel: acpiphp: Slot [22] registered
May 13 05:40:14.066782 kernel: acpiphp: Slot [23] registered
May 13 05:40:14.066803 kernel: acpiphp: Slot [24] registered
May 13 05:40:14.066824 kernel: acpiphp: Slot [25] registered
May 13 05:40:14.066845 kernel: acpiphp: Slot [26] registered
May 13 05:40:14.066870 kernel: acpiphp: Slot [27] registered
May 13 05:40:14.066891 kernel: acpiphp: Slot [28] registered
May 13 05:40:14.066911 kernel: acpiphp: Slot [29] registered
May 13 05:40:14.066932 kernel: acpiphp: Slot [30] registered
May 13 05:40:14.066953 kernel: acpiphp: Slot [31] registered
May 13 05:40:14.066974 kernel: PCI host bridge to bus 0000:00
May 13 05:40:14.067248 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
May 13 05:40:14.067514 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
May 13 05:40:14.067734 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
May 13 05:40:14.067945 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
May 13 05:40:14.068140 kernel: pci_bus 0000:00: root bus resource [mem 0xc000000000-0xc07fffffff window]
May 13 05:40:14.068338 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 13 05:40:14.069748 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
May 13 05:40:14.070003 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
May 13 05:40:14.070254 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
May 13 05:40:14.071289 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc120-0xc12f]
May 13 05:40:14.073568 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
May 13 05:40:14.073816 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
May 13 05:40:14.074045 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
May 13 05:40:14.074268 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
May 13 05:40:14.074595 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
May 13 05:40:14.074833 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
May 13 05:40:14.075151 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
May 13 05:40:14.075398 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
May 13 05:40:14.076779 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
May 13 05:40:14.077019 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xc000000000-0xc000003fff 64bit pref]
May 13 05:40:14.077247 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfeb90000-0xfeb90fff]
May 13 05:40:14.080530 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfeb80000-0xfeb8ffff pref]
May 13 05:40:14.080813 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
May 13 05:40:14.081060 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
May 13 05:40:14.081289 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc080-0xc0bf]
May 13 05:40:14.081557 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfeb91000-0xfeb91fff]
May 13 05:40:14.081792 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xc000004000-0xc000007fff 64bit pref]
May 13 05:40:14.082019 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb00000-0xfeb7ffff pref]
May 13 05:40:14.082261 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
May 13 05:40:14.086694 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
May 13 05:40:14.086892 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfeb92000-0xfeb92fff]
May 13 05:40:14.087090 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xc000008000-0xc00000bfff 64bit pref]
May 13 05:40:14.087274 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00
May 13 05:40:14.087445 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc0c0-0xc0ff]
May 13 05:40:14.087651 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xc00000c000-0xc00000ffff 64bit pref]
May 13 05:40:14.087834 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00
May 13 05:40:14.088015 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc100-0xc11f]
May 13 05:40:14.088180 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfeb93000-0xfeb93fff]
May 13 05:40:14.088303 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xc000010000-0xc000013fff 64bit pref]
May 13 05:40:14.088316 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
May 13 05:40:14.088325 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
May 13 05:40:14.088334 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
May 13 05:40:14.088343 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
May 13 05:40:14.088352 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
May 13 05:40:14.088365 kernel: iommu: Default domain type: Translated
May 13 05:40:14.088373 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
May 13 05:40:14.088382 kernel: PCI: Using ACPI for IRQ routing
May 13 05:40:14.088391 kernel: PCI: pci_cache_line_size set to 64 bytes
May 13 05:40:14.088399 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
May 13 05:40:14.088408 kernel: e820: reserve RAM buffer [mem 0xbffdd000-0xbfffffff]
May 13 05:40:14.090541 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
May 13 05:40:14.090640 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
May 13 05:40:14.090732 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
May 13 05:40:14.090750 kernel: vgaarb: loaded
May 13 05:40:14.090759 kernel: clocksource: Switched to clocksource kvm-clock
May 13 05:40:14.090768 kernel: VFS: Disk quotas dquot_6.6.0
May 13 05:40:14.090777 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 13 05:40:14.090786 kernel: pnp: PnP ACPI init
May 13 05:40:14.090884 kernel: pnp 00:03: [dma 2]
May 13 05:40:14.090899 kernel: pnp: PnP ACPI: found 5 devices
May 13 05:40:14.090908 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
May 13 05:40:14.090920 kernel: NET: Registered PF_INET protocol family
May 13 05:40:14.090929 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 13 05:40:14.090937 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 13 05:40:14.090946 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 13 05:40:14.090955 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 13 05:40:14.090964 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 13 05:40:14.090972 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 13 05:40:14.090981 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 13 05:40:14.090990 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 13 05:40:14.091000 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 13 05:40:14.091008 kernel: NET: Registered PF_XDP protocol family
May 13 05:40:14.091107 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
May 13 05:40:14.091188 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
May 13 05:40:14.091267 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
May 13 05:40:14.091346 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
May 13 05:40:14.091424 kernel: pci_bus 0000:00: resource 8 [mem 0xc000000000-0xc07fffffff window]
May 13 05:40:14.092739 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
May 13 05:40:14.092843 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
May 13 05:40:14.092858 kernel: PCI: CLS 0 bytes, default 64
May 13 05:40:14.092867 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
May 13 05:40:14.092876 kernel: software IO TLB: mapped [mem 0x00000000bbfdd000-0x00000000bffdd000] (64MB)
May 13 05:40:14.092884 kernel: Initialise system trusted keyrings
May 13 05:40:14.092893 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 13 05:40:14.092902 kernel: Key type asymmetric registered
May 13 05:40:14.092911 kernel: Asymmetric key parser 'x509' registered
May 13 05:40:14.092919 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
May 13 05:40:14.092931 kernel: io scheduler mq-deadline registered
May 13 05:40:14.092940 kernel: io scheduler kyber registered
May 13 05:40:14.092948 kernel: io scheduler bfq registered
May 13 05:40:14.092957 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
May 13 05:40:14.092966 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
May 13 05:40:14.092975 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
May 13 05:40:14.092984 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
May 13 05:40:14.092993 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
May 13 05:40:14.093001 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 13 05:40:14.093012 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
May 13 05:40:14.093021 kernel: random: crng init done
May 13 05:40:14.093030 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
May 13 05:40:14.093038 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
May 13 05:40:14.093047 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
May 13 05:40:14.093138 kernel: rtc_cmos 00:04: RTC can wake from S4
May 13 05:40:14.093153 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
May 13 05:40:14.093232 kernel: rtc_cmos 00:04: registered as rtc0
May 13 05:40:14.093320 kernel: rtc_cmos 00:04: setting system clock to 2025-05-13T05:40:13 UTC (1747114813)
May 13 05:40:14.093423 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
May 13 05:40:14.093438 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
May 13 05:40:14.093447 kernel: NET: Registered PF_INET6 protocol family
May 13 05:40:14.093455 kernel: Segment Routing with IPv6
May 13 05:40:14.096062 kernel: In-situ OAM (IOAM) with IPv6
May 13 05:40:14.096073 kernel: NET: Registered PF_PACKET protocol family
May 13 05:40:14.096083 kernel: Key type dns_resolver registered
May 13 05:40:14.096092 kernel: IPI shorthand broadcast: enabled
May 13 05:40:14.096106 kernel: sched_clock: Marking stable (978007766, 168818712)->(1173097924, -26271446)
May 13 05:40:14.096129 kernel: registered taskstats version 1
May 13 05:40:14.096139 kernel: Loading compiled-in X.509 certificates
May 13 05:40:14.096149 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.89-flatcar: b404fdaaed18d29adfca671c3bbb23eee96fb08f'
May 13 05:40:14.096158 kernel: Key type .fscrypt registered
May 13 05:40:14.096167 kernel: Key type fscrypt-provisioning registered
May 13 05:40:14.096176 kernel: ima: No TPM chip found, activating TPM-bypass!
May 13 05:40:14.096186 kernel: ima: Allocated hash algorithm: sha1
May 13 05:40:14.096198 kernel: ima: No architecture policies found
May 13 05:40:14.096207 kernel: clk: Disabling unused clocks
May 13 05:40:14.096216 kernel: Freeing unused kernel image (initmem) memory: 42864K
May 13 05:40:14.096226 kernel: Write protecting the kernel read-only data: 36864k
May 13 05:40:14.096236 kernel: Freeing unused kernel image (rodata/data gap) memory: 1836K
May 13 05:40:14.096246 kernel: Run /init as init process
May 13 05:40:14.096255 kernel: with arguments:
May 13 05:40:14.096263 kernel: /init
May 13 05:40:14.096272 kernel: with environment:
May 13 05:40:14.096280 kernel: HOME=/
May 13 05:40:14.096291 kernel: TERM=linux
May 13 05:40:14.096299 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 13 05:40:14.096311 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
May 13 05:40:14.096322 systemd[1]: Detected virtualization kvm.
May 13 05:40:14.096332 systemd[1]: Detected architecture x86-64.
May 13 05:40:14.096341 systemd[1]: Running in initrd.
May 13 05:40:14.096351 systemd[1]: No hostname configured, using default hostname.
May 13 05:40:14.096362 systemd[1]: Hostname set to .
May 13 05:40:14.096372 systemd[1]: Initializing machine ID from VM UUID.
May 13 05:40:14.096381 systemd[1]: Queued start job for default target initrd.target.
May 13 05:40:14.096390 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 13 05:40:14.096400 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 13 05:40:14.096410 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 13 05:40:14.096420 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 13 05:40:14.096448 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 13 05:40:14.096483 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 13 05:40:14.096495 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 13 05:40:14.096505 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 13 05:40:14.096515 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 13 05:40:14.096528 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 13 05:40:14.096538 systemd[1]: Reached target paths.target - Path Units.
May 13 05:40:14.096547 systemd[1]: Reached target slices.target - Slice Units.
May 13 05:40:14.096557 systemd[1]: Reached target swap.target - Swaps.
May 13 05:40:14.096566 systemd[1]: Reached target timers.target - Timer Units.
May 13 05:40:14.096576 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 13 05:40:14.096586 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 13 05:40:14.096595 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 13 05:40:14.096605 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
May 13 05:40:14.096617 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 13 05:40:14.096626 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 13 05:40:14.096636 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 13 05:40:14.096646 systemd[1]: Reached target sockets.target - Socket Units.
May 13 05:40:14.096655 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 13 05:40:14.096665 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 13 05:40:14.096675 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 13 05:40:14.096684 systemd[1]: Starting systemd-fsck-usr.service...
May 13 05:40:14.096696 systemd[1]: Starting systemd-journald.service - Journal Service...
May 13 05:40:14.096706 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 13 05:40:14.096715 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 13 05:40:14.096746 systemd-journald[184]: Collecting audit messages is disabled.
May 13 05:40:14.096772 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 13 05:40:14.096782 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 13 05:40:14.096792 systemd[1]: Finished systemd-fsck-usr.service.
May 13 05:40:14.096802 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 13 05:40:14.096812 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 13 05:40:14.096823 kernel: Bridge firewalling registered
May 13 05:40:14.096833 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 13 05:40:14.096844 systemd-journald[184]: Journal started
May 13 05:40:14.096869 systemd-journald[184]: Runtime Journal (/run/log/journal/e2ab151f0ee94b44a3f5b6699dfa5707) is 8.0M, max 78.3M, 70.3M free.
May 13 05:40:14.044166 systemd-modules-load[185]: Inserted module 'overlay'
May 13 05:40:14.141662 systemd[1]: Started systemd-journald.service - Journal Service.
May 13 05:40:14.093382 systemd-modules-load[185]: Inserted module 'br_netfilter'
May 13 05:40:14.142288 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 13 05:40:14.143868 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 13 05:40:14.150587 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 13 05:40:14.155652 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 13 05:40:14.156962 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 13 05:40:14.161383 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 13 05:40:14.177606 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 13 05:40:14.186091 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 13 05:40:14.190963 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 13 05:40:14.191904 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 13 05:40:14.199739 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 13 05:40:14.213111 dracut-cmdline[218]: dracut-dracut-053
May 13 05:40:14.213111 dracut-cmdline[218]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=a30636f72ddb6c7dc7c9bee07b7cf23b403029ba1ff64eed2705530c62c7b592
May 13 05:40:14.210647 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 13 05:40:14.246296 systemd-resolved[228]: Positive Trust Anchors:
May 13 05:40:14.247603 systemd-resolved[228]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 13 05:40:14.247647 systemd-resolved[228]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 13 05:40:14.250939 systemd-resolved[228]: Defaulting to hostname 'linux'.
May 13 05:40:14.251841 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 13 05:40:14.253045 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 13 05:40:14.303553 kernel: SCSI subsystem initialized
May 13 05:40:14.314521 kernel: Loading iSCSI transport class v2.0-870.
May 13 05:40:14.326867 kernel: iscsi: registered transport (tcp)
May 13 05:40:14.349594 kernel: iscsi: registered transport (qla4xxx)
May 13 05:40:14.349667 kernel: QLogic iSCSI HBA Driver
May 13 05:40:14.409032 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 13 05:40:14.416793 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 13 05:40:14.469713 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 13 05:40:14.469833 kernel: device-mapper: uevent: version 1.0.3
May 13 05:40:14.471091 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
May 13 05:40:14.518530 kernel: raid6: sse2x4 gen() 12776 MB/s
May 13 05:40:14.536518 kernel: raid6: sse2x2 gen() 14654 MB/s
May 13 05:40:14.554797 kernel: raid6: sse2x1 gen() 9878 MB/s
May 13 05:40:14.554868 kernel: raid6: using algorithm sse2x2 gen() 14654 MB/s
May 13 05:40:14.573830 kernel: raid6: .... xor() 9347 MB/s, rmw enabled
May 13 05:40:14.573892 kernel: raid6: using ssse3x2 recovery algorithm
May 13 05:40:14.597206 kernel: xor: measuring software checksum speed
May 13 05:40:14.597275 kernel: prefetch64-sse : 18481 MB/sec
May 13 05:40:14.597729 kernel: generic_sse : 16790 MB/sec
May 13 05:40:14.598857 kernel: xor: using function: prefetch64-sse (18481 MB/sec)
May 13 05:40:14.786521 kernel: Btrfs loaded, zoned=no, fsverity=no
May 13 05:40:14.804019 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
May 13 05:40:14.809801 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 13 05:40:14.824237 systemd-udevd[402]: Using default interface naming scheme 'v255'.
May 13 05:40:14.828801 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 13 05:40:14.842377 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 13 05:40:14.859035 dracut-pre-trigger[412]: rd.md=0: removing MD RAID activation
May 13 05:40:14.901788 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 13 05:40:14.907779 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 13 05:40:14.951563 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 13 05:40:14.963692 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
May 13 05:40:15.007877 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
May 13 05:40:15.009817 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
May 13 05:40:15.011328 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 13 05:40:15.012573 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 13 05:40:15.018983 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
May 13 05:40:15.034678 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
May 13 05:40:15.051576 kernel: virtio_blk virtio2: 2/0/0 default/read/poll queues
May 13 05:40:15.057397 kernel: virtio_blk virtio2: [vda] 20971520 512-byte logical blocks (10.7 GB/10.0 GiB)
May 13 05:40:15.062134 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 13 05:40:15.062267 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 13 05:40:15.064290 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 13 05:40:15.065573 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 13 05:40:15.065712 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 13 05:40:15.067162 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 13 05:40:15.077705 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 13 05:40:15.077746 kernel: GPT:17805311 != 20971519
May 13 05:40:15.077758 kernel: GPT:Alternate GPT header not at the end of the disk.
May 13 05:40:15.077769 kernel: GPT:17805311 != 20971519
May 13 05:40:15.077780 kernel: GPT: Use GNU Parted to correct GPT errors.
May 13 05:40:15.077791 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 13 05:40:15.076701 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 13 05:40:15.088591 kernel: libata version 3.00 loaded.
May 13 05:40:15.088618 kernel: ata_piix 0000:00:01.1: version 2.13
May 13 05:40:15.091516 kernel: scsi host0: ata_piix
May 13 05:40:15.094481 kernel: scsi host1: ata_piix
May 13 05:40:15.105362 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc120 irq 14
May 13 05:40:15.105410 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc128 irq 15
May 13 05:40:15.117278 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
May 13 05:40:15.165147 kernel: BTRFS: device fsid b9c18834-b687-45d3-9868-9ac29dc7ddd7 devid 1 transid 39 /dev/vda3 scanned by (udev-worker) (451)
May 13 05:40:15.165172 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (448)
May 13 05:40:15.169741 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 13 05:40:15.196175 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
May 13 05:40:15.204785 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
May 13 05:40:15.205878 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
May 13 05:40:15.214440 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 13 05:40:15.220623 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
May 13 05:40:15.224612 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 13 05:40:15.232491 disk-uuid[504]: Primary Header is updated.
May 13 05:40:15.232491 disk-uuid[504]: Secondary Entries is updated.
May 13 05:40:15.232491 disk-uuid[504]: Secondary Header is updated.
May 13 05:40:15.242726 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 13 05:40:15.246512 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 13 05:40:15.246610 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 13 05:40:16.263548 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 13 05:40:16.265340 disk-uuid[506]: The operation has completed successfully.
May 13 05:40:16.337295 systemd[1]: disk-uuid.service: Deactivated successfully.
May 13 05:40:16.337590 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
May 13 05:40:16.374595 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
May 13 05:40:16.379858 sh[527]: Success
May 13 05:40:16.398507 kernel: device-mapper: verity: sha256 using implementation "sha256-ssse3"
May 13 05:40:16.475852 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
May 13 05:40:16.485784 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
May 13 05:40:16.488010 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
May 13 05:40:16.527493 kernel: BTRFS info (device dm-0): first mount of filesystem b9c18834-b687-45d3-9868-9ac29dc7ddd7
May 13 05:40:16.527549 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
May 13 05:40:16.527563 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
May 13 05:40:16.536321 kernel: BTRFS info (device dm-0): disabling log replay at mount time
May 13 05:40:16.536349 kernel: BTRFS info (device dm-0): using free space tree
May 13 05:40:16.557015 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
May 13 05:40:16.559511 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
May 13 05:40:16.566777 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
May 13 05:40:16.585831 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
May 13 05:40:16.615689 kernel: BTRFS info (device vda6): first mount of filesystem 97fe19c2-c075-4d7e-9417-f9c367b49e5c
May 13 05:40:16.615765 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 13 05:40:16.619721 kernel: BTRFS info (device vda6): using free space tree
May 13 05:40:16.634578 kernel: BTRFS info (device vda6): auto enabling async discard
May 13 05:40:16.661158 systemd[1]: mnt-oem.mount: Deactivated successfully.
May 13 05:40:16.662935 kernel: BTRFS info (device vda6): last unmount of filesystem 97fe19c2-c075-4d7e-9417-f9c367b49e5c
May 13 05:40:16.673660 systemd[1]: Finished ignition-setup.service - Ignition (setup).
May 13 05:40:16.684864 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
May 13 05:40:16.722236 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 13 05:40:16.735712 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 13 05:40:16.759779 systemd-networkd[709]: lo: Link UP
May 13 05:40:16.759789 systemd-networkd[709]: lo: Gained carrier
May 13 05:40:16.761037 systemd-networkd[709]: Enumeration completed
May 13 05:40:16.761417 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 13 05:40:16.762039 systemd-networkd[709]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 13 05:40:16.762042 systemd-networkd[709]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 13 05:40:16.763266 systemd-networkd[709]: eth0: Link UP
May 13 05:40:16.763270 systemd-networkd[709]: eth0: Gained carrier
May 13 05:40:16.763278 systemd-networkd[709]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 13 05:40:16.763602 systemd[1]: Reached target network.target - Network.
May 13 05:40:16.777509 systemd-networkd[709]: eth0: DHCPv4 address 172.24.4.224/24, gateway 172.24.4.1 acquired from 172.24.4.1
May 13 05:40:16.835902 ignition[650]: Ignition 2.19.0
May 13 05:40:16.835918 ignition[650]: Stage: fetch-offline
May 13 05:40:16.835955 ignition[650]: no configs at "/usr/lib/ignition/base.d"
May 13 05:40:16.835965 ignition[650]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
May 13 05:40:16.840113 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
May 13 05:40:16.836056 ignition[650]: parsed url from cmdline: ""
May 13 05:40:16.840449 systemd-resolved[228]: Detected conflict on linux IN A 172.24.4.224
May 13 05:40:16.836060 ignition[650]: no config URL provided
May 13 05:40:16.840478 systemd-resolved[228]: Hostname conflict, changing published hostname from 'linux' to 'linux10'.
May 13 05:40:16.836067 ignition[650]: reading system config file "/usr/lib/ignition/user.ign"
May 13 05:40:16.836076 ignition[650]: no config at "/usr/lib/ignition/user.ign"
May 13 05:40:16.836081 ignition[650]: failed to fetch config: resource requires networking
May 13 05:40:16.836264 ignition[650]: Ignition finished successfully
May 13 05:40:16.849898 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
May 13 05:40:16.889575 ignition[721]: Ignition 2.19.0
May 13 05:40:16.889601 ignition[721]: Stage: fetch
May 13 05:40:16.890025 ignition[721]: no configs at "/usr/lib/ignition/base.d"
May 13 05:40:16.890053 ignition[721]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
May 13 05:40:16.890276 ignition[721]: parsed url from cmdline: ""
May 13 05:40:16.890286 ignition[721]: no config URL provided
May 13 05:40:16.890299 ignition[721]: reading system config file "/usr/lib/ignition/user.ign"
May 13 05:40:16.890319 ignition[721]: no config at "/usr/lib/ignition/user.ign"
May 13 05:40:16.890510 ignition[721]: config drive ("/dev/disk/by-label/config-2") not found. Waiting...
May 13 05:40:16.890547 ignition[721]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting...
May 13 05:40:16.890645 ignition[721]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1
May 13 05:40:17.350802 ignition[721]: GET result: OK
May 13 05:40:17.350960 ignition[721]: parsing config with SHA512: 338bbad1110cdf64f02d9301c2d53e2e88c653c884490106f1ed8e51084e8312faab8656e5d644c7f8f6fc8b053d1120235c9514ca7f00f5f18df287e350eed0
May 13 05:40:17.360336 unknown[721]: fetched base config from "system"
May 13 05:40:17.360377 unknown[721]: fetched base config from "system"
May 13 05:40:17.360396 unknown[721]: fetched user config from "openstack"
May 13 05:40:17.362111 ignition[721]: fetch: fetch complete
May 13 05:40:17.362130 ignition[721]: fetch: fetch passed
May 13 05:40:17.364869 ignition[721]: Ignition finished successfully
May 13 05:40:17.368763 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
May 13 05:40:17.379636 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
May 13 05:40:17.407391 ignition[728]: Ignition 2.19.0
May 13 05:40:17.407407 ignition[728]: Stage: kargs
May 13 05:40:17.407889 ignition[728]: no configs at "/usr/lib/ignition/base.d"
May 13 05:40:17.407916 ignition[728]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
May 13 05:40:17.410143 ignition[728]: kargs: kargs passed
May 13 05:40:17.410241 ignition[728]: Ignition finished successfully
May 13 05:40:17.411673 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
May 13 05:40:17.418815 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
May 13 05:40:17.433691 ignition[734]: Ignition 2.19.0
May 13 05:40:17.433705 ignition[734]: Stage: disks
May 13 05:40:17.433993 ignition[734]: no configs at "/usr/lib/ignition/base.d"
May 13 05:40:17.437307 systemd[1]: Finished ignition-disks.service - Ignition (disks).
May 13 05:40:17.434007 ignition[734]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
May 13 05:40:17.435069 ignition[734]: disks: disks passed
May 13 05:40:17.439897 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
May 13 05:40:17.435122 ignition[734]: Ignition finished successfully
May 13 05:40:17.441919 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 13 05:40:17.443688 systemd[1]: Reached target local-fs.target - Local File Systems.
May 13 05:40:17.446036 systemd[1]: Reached target sysinit.target - System Initialization.
May 13 05:40:17.449263 systemd[1]: Reached target basic.target - Basic System.
May 13 05:40:17.459817 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
May 13 05:40:17.483510 systemd-fsck[742]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
May 13 05:40:17.492227 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
May 13 05:40:17.499703 systemd[1]: Mounting sysroot.mount - /sysroot...
May 13 05:40:17.612501 kernel: EXT4-fs (vda9): mounted filesystem 422ad498-4f61-405b-9d71-25f19459d196 r/w with ordered data mode. Quota mode: none.
May 13 05:40:17.614251 systemd[1]: Mounted sysroot.mount - /sysroot.
May 13 05:40:17.616431 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
May 13 05:40:17.627645 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 13 05:40:17.631097 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
May 13 05:40:17.632627 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
May 13 05:40:17.638638 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent...
May 13 05:40:17.641308 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 13 05:40:17.644365 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (750)
May 13 05:40:17.643320 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
May 13 05:40:17.646737 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
May 13 05:40:17.651611 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
May 13 05:40:17.664318 kernel: BTRFS info (device vda6): first mount of filesystem 97fe19c2-c075-4d7e-9417-f9c367b49e5c
May 13 05:40:17.669820 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 13 05:40:17.669848 kernel: BTRFS info (device vda6): using free space tree
May 13 05:40:17.681491 kernel: BTRFS info (device vda6): auto enabling async discard
May 13 05:40:17.684969 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 13 05:40:17.786024 initrd-setup-root[778]: cut: /sysroot/etc/passwd: No such file or directory
May 13 05:40:17.792845 initrd-setup-root[785]: cut: /sysroot/etc/group: No such file or directory
May 13 05:40:17.799762 initrd-setup-root[792]: cut: /sysroot/etc/shadow: No such file or directory
May 13 05:40:17.811552 initrd-setup-root[799]: cut: /sysroot/etc/gshadow: No such file or directory
May 13 05:40:17.963831 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
May 13 05:40:17.972636 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
May 13 05:40:17.981827 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
May 13 05:40:18.001458 systemd[1]: sysroot-oem.mount: Deactivated successfully.
May 13 05:40:18.007567 kernel: BTRFS info (device vda6): last unmount of filesystem 97fe19c2-c075-4d7e-9417-f9c367b49e5c
May 13 05:40:18.042365 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
May 13 05:40:18.052493 ignition[866]: INFO : Ignition 2.19.0
May 13 05:40:18.052493 ignition[866]: INFO : Stage: mount
May 13 05:40:18.052493 ignition[866]: INFO : no configs at "/usr/lib/ignition/base.d"
May 13 05:40:18.052493 ignition[866]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
May 13 05:40:18.061211 ignition[866]: INFO : mount: mount passed
May 13 05:40:18.061211 ignition[866]: INFO : Ignition finished successfully
May 13 05:40:18.056387 systemd[1]: Finished ignition-mount.service - Ignition (mount).
May 13 05:40:18.212774 systemd-networkd[709]: eth0: Gained IPv6LL
May 13 05:40:24.876845 coreos-metadata[752]: May 13 05:40:24.876 WARN failed to locate config-drive, using the metadata service API instead
May 13 05:40:24.917738 coreos-metadata[752]: May 13 05:40:24.917 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
May 13 05:40:24.933329 coreos-metadata[752]: May 13 05:40:24.933 INFO Fetch successful
May 13 05:40:24.934863 coreos-metadata[752]: May 13 05:40:24.934 INFO wrote hostname ci-4081-3-3-n-f146884e63.novalocal to /sysroot/etc/hostname
May 13 05:40:24.938498 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully.
May 13 05:40:24.938767 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent.
May 13 05:40:24.954208 systemd[1]: Starting ignition-files.service - Ignition (files)...
May 13 05:40:24.979827 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 13 05:40:24.998565 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (883)
May 13 05:40:25.007248 kernel: BTRFS info (device vda6): first mount of filesystem 97fe19c2-c075-4d7e-9417-f9c367b49e5c
May 13 05:40:25.007311 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 13 05:40:25.012580 kernel: BTRFS info (device vda6): using free space tree
May 13 05:40:25.025548 kernel: BTRFS info (device vda6): auto enabling async discard
May 13 05:40:25.031090 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 13 05:40:25.070054 ignition[901]: INFO : Ignition 2.19.0
May 13 05:40:25.070054 ignition[901]: INFO : Stage: files
May 13 05:40:25.070054 ignition[901]: INFO : no configs at "/usr/lib/ignition/base.d"
May 13 05:40:25.070054 ignition[901]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
May 13 05:40:25.076905 ignition[901]: DEBUG : files: compiled without relabeling support, skipping
May 13 05:40:25.076905 ignition[901]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 13 05:40:25.076905 ignition[901]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 13 05:40:25.083643 ignition[901]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 13 05:40:25.083643 ignition[901]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 13 05:40:25.083643 ignition[901]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 13 05:40:25.083643 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
May 13 05:40:25.083643 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
May 13 05:40:25.079533 unknown[901]: wrote ssh authorized keys file for user: core
May 13 05:40:25.204111 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
May 13 05:40:25.794729 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
May 13 05:40:25.800167 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 13 05:40:25.800167 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
May 13 05:40:26.479362 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
May 13 05:40:27.471363 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 13 05:40:27.471363 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
May 13 05:40:27.471363 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
May 13 05:40:27.471363 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
May 13 05:40:27.471363 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 13 05:40:27.471363 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 13 05:40:27.483819 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 13 05:40:27.483819 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 13 05:40:27.483819 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 13 05:40:27.483819 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 13 05:40:27.483819 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 13 05:40:27.483819 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
May 13 05:40:27.483819 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
May 13 05:40:27.483819 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
May 13 05:40:27.483819 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1
May 13 05:40:28.044828 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
May 13 05:40:30.510059 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
May 13 05:40:30.510059 ignition[901]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
May 13 05:40:30.515673 ignition[901]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 13 05:40:30.515673 ignition[901]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 13 05:40:30.515673 ignition[901]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
May 13 05:40:30.515673 ignition[901]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
May 13 05:40:30.515673 ignition[901]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
May 13 05:40:30.515673 ignition[901]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
May 13 05:40:30.515673 ignition[901]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 13 05:40:30.515673 ignition[901]: INFO : files: files passed
May 13 05:40:30.515673 ignition[901]: INFO : Ignition finished successfully
May 13 05:40:30.517441 systemd[1]: Finished ignition-files.service - Ignition (files).
May 13 05:40:30.532296 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
May 13 05:40:30.547812 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
May 13 05:40:30.556689 systemd[1]: ignition-quench.service: Deactivated successfully.
May 13 05:40:30.558373 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
May 13 05:40:30.573186 initrd-setup-root-after-ignition[929]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 13 05:40:30.575710 initrd-setup-root-after-ignition[929]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
May 13 05:40:30.577767 initrd-setup-root-after-ignition[933]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 13 05:40:30.581018 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
May 13 05:40:30.592041 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
May 13 05:40:30.601838 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
May 13 05:40:30.656446 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 13 05:40:30.656721 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
May 13 05:40:30.660428 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
May 13 05:40:30.662792 systemd[1]: Reached target initrd.target - Initrd Default Target.
May 13 05:40:30.665688 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
May 13 05:40:30.672815 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
May 13 05:40:30.693036 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 13 05:40:30.699711 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
May 13 05:40:30.719216 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
May 13 05:40:30.721767 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
May 13 05:40:30.723029 systemd[1]: Stopped target timers.target - Timer Units.
May 13 05:40:30.725215 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 13 05:40:30.725365 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 13 05:40:30.727771 systemd[1]: Stopped target initrd.target - Initrd Default Target.
May 13 05:40:30.728958 systemd[1]: Stopped target basic.target - Basic System.
May 13 05:40:30.732050 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
May 13 05:40:30.734656 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
May 13 05:40:30.737099 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
May 13 05:40:30.740049 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
May 13 05:40:30.742898 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
May 13 05:40:30.745870 systemd[1]: Stopped target sysinit.target - System Initialization.
May 13 05:40:30.748718 systemd[1]: Stopped target local-fs.target - Local File Systems.
May 13 05:40:30.751623 systemd[1]: Stopped target swap.target - Swaps.
May 13 05:40:30.754232 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 13 05:40:30.754547 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
May 13 05:40:30.757576 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
May 13 05:40:30.759411 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 13 05:40:30.761974 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
May 13 05:40:30.762825 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 13 05:40:30.764976 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 13 05:40:30.765239 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
May 13 05:40:30.769123 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 13 05:40:30.769396 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
May 13 05:40:30.771247 systemd[1]: ignition-files.service: Deactivated successfully.
May 13 05:40:30.771530 systemd[1]: Stopped ignition-files.service - Ignition (files).
May 13 05:40:30.780981 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
May 13 05:40:30.783267 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 13 05:40:30.783694 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
May 13 05:40:30.792043 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
May 13 05:40:30.794293 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 13 05:40:30.795729 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
May 13 05:40:30.800250 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 13 05:40:30.800406 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
May 13 05:40:30.808790 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 13 05:40:30.809063 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
May 13 05:40:30.821492 ignition[953]: INFO : Ignition 2.19.0
May 13 05:40:30.821492 ignition[953]: INFO : Stage: umount
May 13 05:40:30.822802 ignition[953]: INFO : no configs at "/usr/lib/ignition/base.d"
May 13 05:40:30.822802 ignition[953]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
May 13 05:40:30.825141 ignition[953]: INFO : umount: umount passed
May 13 05:40:30.825141 ignition[953]: INFO : Ignition finished successfully
May 13 05:40:30.824352 systemd[1]: ignition-mount.service: Deactivated successfully.
May 13 05:40:30.825517 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
May 13 05:40:30.826748 systemd[1]: ignition-disks.service: Deactivated successfully.
May 13 05:40:30.826823 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
May 13 05:40:30.830173 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 13 05:40:30.830218 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
May 13 05:40:30.831443 systemd[1]: ignition-fetch.service: Deactivated successfully.
May 13 05:40:30.831505 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
May 13 05:40:30.832603 systemd[1]: Stopped target network.target - Network.
May 13 05:40:30.834227 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 13 05:40:30.834278 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
May 13 05:40:30.834821 systemd[1]: Stopped target paths.target - Path Units.
May 13 05:40:30.835283 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 13 05:40:30.838685 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 13 05:40:30.839617 systemd[1]: Stopped target slices.target - Slice Units.
May 13 05:40:30.840787 systemd[1]: Stopped target sockets.target - Socket Units.
May 13 05:40:30.841818 systemd[1]: iscsid.socket: Deactivated successfully.
May 13 05:40:30.841861 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
May 13 05:40:30.842945 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 13 05:40:30.842995 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 13 05:40:30.844220 systemd[1]: ignition-setup.service: Deactivated successfully.
May 13 05:40:30.844266 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
May 13 05:40:30.845254 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
May 13 05:40:30.845294 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
May 13 05:40:30.846429 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
May 13 05:40:30.847642 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
May 13 05:40:30.849633 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 13 05:40:30.850186 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 13 05:40:30.850281 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
May 13 05:40:30.851892 systemd-networkd[709]: eth0: DHCPv6 lease lost
May 13 05:40:30.852279 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 13 05:40:30.852345 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
May 13 05:40:30.854156 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 13 05:40:30.854250 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
May 13 05:40:30.855652 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 13 05:40:30.855718 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
May 13 05:40:30.864744 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
May 13 05:40:30.866331 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 13 05:40:30.866392 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 13 05:40:30.867080 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 13 05:40:30.867887 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 13 05:40:30.868004 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
May 13 05:40:30.872276 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 13 05:40:30.872345 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 13 05:40:30.874412 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 13 05:40:30.874503 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
May 13 05:40:30.875048 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
May 13 05:40:30.875089 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
May 13 05:40:30.881564 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 13 05:40:30.881698 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 13 05:40:30.882892 systemd[1]: network-cleanup.service: Deactivated successfully.
May 13 05:40:30.883006 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
May 13 05:40:30.884393 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 13 05:40:30.884450 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
May 13 05:40:30.885671 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 13 05:40:30.885704 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
May 13 05:40:30.886713 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 13 05:40:30.886756 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
May 13 05:40:30.888375 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 13 05:40:30.888415 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
May 13 05:40:30.889564 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 13 05:40:30.889606 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 13 05:40:30.897676 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
May 13 05:40:30.898933 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 13 05:40:30.899026 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 13 05:40:30.899667 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 13 05:40:30.899716 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 13 05:40:30.907129 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 13 05:40:30.907242 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
May 13 05:40:30.908790 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
May 13 05:40:30.918662 systemd[1]: Starting initrd-switch-root.service - Switch Root...
May 13 05:40:30.926021 systemd[1]: Switching root.
May 13 05:40:30.960727 systemd-journald[184]: Journal stopped
May 13 05:40:32.714799 systemd-journald[184]: Received SIGTERM from PID 1 (systemd).
May 13 05:40:32.714870 kernel: SELinux: policy capability network_peer_controls=1
May 13 05:40:32.714890 kernel: SELinux: policy capability open_perms=1
May 13 05:40:32.714903 kernel: SELinux: policy capability extended_socket_class=1
May 13 05:40:32.714915 kernel: SELinux: policy capability always_check_network=0
May 13 05:40:32.714930 kernel: SELinux: policy capability cgroup_seclabel=1
May 13 05:40:32.714942 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 13 05:40:32.714983 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 13 05:40:32.715001 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 13 05:40:32.715013 kernel: audit: type=1403 audit(1747114831.622:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
May 13 05:40:32.715026 systemd[1]: Successfully loaded SELinux policy in 79.953ms.
May 13 05:40:32.715045 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 26.548ms.
May 13 05:40:32.715059 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
May 13 05:40:32.715072 systemd[1]: Detected virtualization kvm.
May 13 05:40:32.715088 systemd[1]: Detected architecture x86-64.
May 13 05:40:32.715100 systemd[1]: Detected first boot.
May 13 05:40:32.715113 systemd[1]: Hostname set to .
May 13 05:40:32.715126 systemd[1]: Initializing machine ID from VM UUID.
May 13 05:40:32.715138 zram_generator::config[995]: No configuration found.
May 13 05:40:32.715152 systemd[1]: Populated /etc with preset unit settings.
May 13 05:40:32.715165 systemd[1]: initrd-switch-root.service: Deactivated successfully.
May 13 05:40:32.715179 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
May 13 05:40:32.715192 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
May 13 05:40:32.715205 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
May 13 05:40:32.715219 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
May 13 05:40:32.715232 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
May 13 05:40:32.715245 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
May 13 05:40:32.715258 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
May 13 05:40:32.715271 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
May 13 05:40:32.715284 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
May 13 05:40:32.715301 systemd[1]: Created slice user.slice - User and Session Slice.
May 13 05:40:32.715315 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 13 05:40:32.715332 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 13 05:40:32.715345 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
May 13 05:40:32.715358 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
May 13 05:40:32.715371 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
May 13 05:40:32.715385 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 13 05:40:32.715398 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
May 13 05:40:32.715413 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 13 05:40:32.715425 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
May 13 05:40:32.715438 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
May 13 05:40:32.715451 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
May 13 05:40:32.715488 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
May 13 05:40:32.715504 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 13 05:40:32.715524 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 13 05:40:32.715539 systemd[1]: Reached target slices.target - Slice Units.
May 13 05:40:32.715552 systemd[1]: Reached target swap.target - Swaps.
May 13 05:40:32.715564 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
May 13 05:40:32.715578 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
May 13 05:40:32.715590 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 13 05:40:32.715603 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 13 05:40:32.715615 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 13 05:40:32.715633 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
May 13 05:40:32.715646 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
May 13 05:40:32.715661 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
May 13 05:40:32.715673 systemd[1]: Mounting media.mount - External Media Directory...
May 13 05:40:32.715686 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 13 05:40:32.715700 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
May 13 05:40:32.715714 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
May 13 05:40:32.715727 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
May 13 05:40:32.715740 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 13 05:40:32.715752 systemd[1]: Reached target machines.target - Containers.
May 13 05:40:32.715766 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
May 13 05:40:32.715778 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 13 05:40:32.715791 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 13 05:40:32.715803 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
May 13 05:40:32.715815 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 13 05:40:32.715827 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 13 05:40:32.715839 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 13 05:40:32.715851 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
May 13 05:40:32.715862 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 13 05:40:32.715877 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 13 05:40:32.715890 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
May 13 05:40:32.715901 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
May 13 05:40:32.715913 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
May 13 05:40:32.715925 systemd[1]: Stopped systemd-fsck-usr.service.
May 13 05:40:32.715937 systemd[1]: Starting systemd-journald.service - Journal Service...
May 13 05:40:32.715949 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 13 05:40:32.715961 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 13 05:40:32.715973 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
May 13 05:40:32.715986 kernel: loop: module loaded
May 13 05:40:32.715997 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 13 05:40:32.716010 systemd[1]: verity-setup.service: Deactivated successfully.
May 13 05:40:32.716023 systemd[1]: Stopped verity-setup.service.
May 13 05:40:32.716035 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 13 05:40:32.716048 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
May 13 05:40:32.716060 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
May 13 05:40:32.716072 systemd[1]: Mounted media.mount - External Media Directory.
May 13 05:40:32.716086 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
May 13 05:40:32.716098 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
May 13 05:40:32.716111 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
May 13 05:40:32.716122 kernel: ACPI: bus type drm_connector registered
May 13 05:40:32.716134 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 13 05:40:32.716147 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 13 05:40:32.716159 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
May 13 05:40:32.716171 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 13 05:40:32.716183 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 13 05:40:32.716195 kernel: fuse: init (API version 7.39)
May 13 05:40:32.716222 systemd-journald[1084]: Collecting audit messages is disabled.
May 13 05:40:32.716247 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
May 13 05:40:32.716260 systemd-journald[1084]: Journal started
May 13 05:40:32.716286 systemd-journald[1084]: Runtime Journal (/run/log/journal/e2ab151f0ee94b44a3f5b6699dfa5707) is 8.0M, max 78.3M, 70.3M free.
May 13 05:40:32.349754 systemd[1]: Queued start job for default target multi-user.target.
May 13 05:40:32.375527 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
May 13 05:40:32.375901 systemd[1]: systemd-journald.service: Deactivated successfully.
May 13 05:40:32.720610 systemd[1]: Started systemd-journald.service - Journal Service.
May 13 05:40:32.721312 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 13 05:40:32.721593 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 13 05:40:32.722377 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 13 05:40:32.722655 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 13 05:40:32.723517 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 13 05:40:32.723751 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
May 13 05:40:32.724604 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 13 05:40:32.724829 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 13 05:40:32.725766 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 13 05:40:32.726750 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 13 05:40:32.727874 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
May 13 05:40:32.738348 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 13 05:40:32.744577 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
May 13 05:40:32.750795 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
May 13 05:40:32.751871 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 13 05:40:32.751969 systemd[1]: Reached target local-fs.target - Local File Systems.
May 13 05:40:32.753801 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
May 13 05:40:32.757578 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
May 13 05:40:32.773634 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
May 13 05:40:32.774627 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 13 05:40:32.778673 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
May 13 05:40:32.790627 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
May 13 05:40:32.791271 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 13 05:40:32.795657 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
May 13 05:40:32.796293 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 13 05:40:32.797508 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 13 05:40:32.802025 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
May 13 05:40:32.805559 systemd-journald[1084]: Time spent on flushing to /var/log/journal/e2ab151f0ee94b44a3f5b6699dfa5707 is 40.862ms for 945 entries.
May 13 05:40:32.805559 systemd-journald[1084]: System Journal (/var/log/journal/e2ab151f0ee94b44a3f5b6699dfa5707) is 8.0M, max 584.8M, 576.8M free.
May 13 05:40:32.881693 systemd-journald[1084]: Received client request to flush runtime journal.
May 13 05:40:32.881741 kernel: loop0: detected capacity change from 0 to 205544
May 13 05:40:32.810699 systemd[1]: Starting systemd-sysusers.service - Create System Users...
May 13 05:40:32.816697 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 13 05:40:32.818176 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
May 13 05:40:32.819855 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
May 13 05:40:32.820915 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
May 13 05:40:32.837695 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
May 13 05:40:32.858333 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
May 13 05:40:32.859094 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
May 13 05:40:32.867667 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
May 13 05:40:32.868642 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 13 05:40:32.890539 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
May 13 05:40:32.893538 udevadm[1136]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
May 13 05:40:32.939014 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 13 05:40:32.938499 systemd[1]: Finished systemd-sysusers.service - Create System Users.
May 13 05:40:32.944645 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 13 05:40:32.969948 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 13 05:40:32.972271 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
May 13 05:40:32.992522 kernel: loop1: detected capacity change from 0 to 140768
May 13 05:40:32.997378 systemd-tmpfiles[1146]: ACLs are not supported, ignoring.
May 13 05:40:32.997698 systemd-tmpfiles[1146]: ACLs are not supported, ignoring.
May 13 05:40:33.008387 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 13 05:40:33.077868 kernel: loop2: detected capacity change from 0 to 8
May 13 05:40:33.103538 kernel: loop3: detected capacity change from 0 to 142488
May 13 05:40:33.169924 kernel: loop4: detected capacity change from 0 to 205544
May 13 05:40:33.270545 kernel: loop5: detected capacity change from 0 to 140768
May 13 05:40:33.396611 kernel: loop6: detected capacity change from 0 to 8
May 13 05:40:33.402636 kernel: loop7: detected capacity change from 0 to 142488
May 13 05:40:33.478074 (sd-merge)[1154]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'.
May 13 05:40:33.479555 (sd-merge)[1154]: Merged extensions into '/usr'.
May 13 05:40:33.486546 systemd[1]: Reloading requested from client PID 1128 ('systemd-sysext') (unit systemd-sysext.service)...
May 13 05:40:33.486564 systemd[1]: Reloading...
May 13 05:40:33.573495 zram_generator::config[1176]: No configuration found.
May 13 05:40:33.793142 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 13 05:40:33.855400 systemd[1]: Reloading finished in 368 ms.
May 13 05:40:33.896939 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
May 13 05:40:33.898085 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
May 13 05:40:33.907725 systemd[1]: Starting ensure-sysext.service...
May 13 05:40:33.909782 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 13 05:40:33.913621 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 13 05:40:33.934910 systemd[1]: Reloading requested from client PID 1236 ('systemctl') (unit ensure-sysext.service)...
May 13 05:40:33.934929 systemd[1]: Reloading...
May 13 05:40:33.953776 systemd-udevd[1238]: Using default interface naming scheme 'v255'.
May 13 05:40:33.968524 systemd-tmpfiles[1237]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 13 05:40:33.969150 systemd-tmpfiles[1237]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
May 13 05:40:33.974513 systemd-tmpfiles[1237]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 13 05:40:33.975305 systemd-tmpfiles[1237]: ACLs are not supported, ignoring.
May 13 05:40:33.975594 systemd-tmpfiles[1237]: ACLs are not supported, ignoring.
May 13 05:40:34.002581 zram_generator::config[1263]: No configuration found.
May 13 05:40:34.003053 systemd-tmpfiles[1237]: Detected autofs mount point /boot during canonicalization of boot.
May 13 05:40:34.003146 systemd-tmpfiles[1237]: Skipping /boot
May 13 05:40:34.013106 systemd-tmpfiles[1237]: Detected autofs mount point /boot during canonicalization of boot.
May 13 05:40:34.013216 systemd-tmpfiles[1237]: Skipping /boot
May 13 05:40:34.175808 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 13 05:40:34.209016 ldconfig[1123]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 13 05:40:34.270384 systemd[1]: Reloading finished in 335 ms.
May 13 05:40:34.287340 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 13 05:40:34.291866 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
May 13 05:40:34.302916 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 13 05:40:34.327155 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
May 13 05:40:34.344490 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
May 13 05:40:34.348389 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 13 05:40:34.349673 kernel: ACPI: button: Power Button [PWRF]
May 13 05:40:34.349717 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1320)
May 13 05:40:34.356106 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
May 13 05:40:34.362059 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
May 13 05:40:34.364433 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 13 05:40:34.373171 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 13 05:40:34.379019 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 13 05:40:34.385736 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 13 05:40:34.386869 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 13 05:40:34.397621 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
May 13 05:40:34.398762 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
May 13 05:40:34.402504 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 13 05:40:34.406026 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 13 05:40:34.407952 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
May 13 05:40:34.409660 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 13 05:40:34.413124 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 13 05:40:34.413332 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 13 05:40:34.425215 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
May 13 05:40:34.426672 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 13 05:40:34.426870 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 13 05:40:34.428325 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 13 05:40:34.428872 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 13 05:40:34.436812 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 13 05:40:34.439597 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 13 05:40:34.440259 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 13 05:40:34.440340 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 13 05:40:34.440836 systemd[1]: Finished ensure-sysext.service.
May 13 05:40:34.453521 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
May 13 05:40:34.454502 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 13 05:40:34.454705 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 13 05:40:34.459649 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 13 05:40:34.468691 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
May 13 05:40:34.481091 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 13 05:40:34.481298 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 13 05:40:34.482301 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 13 05:40:34.484857 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
May 13 05:40:34.488541 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
May 13 05:40:34.500647 systemd[1]: Starting systemd-update-done.service - Update is Completed...
May 13 05:40:34.504870 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 13 05:40:34.510693 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
May 13 05:40:34.522068 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 13 05:40:34.522545 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 13 05:40:34.562611 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
May 13 05:40:34.567613 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
May 13 05:40:34.567675 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
May 13 05:40:34.567110 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 13 05:40:34.571599 kernel: Console: switching to colour dummy device 80x25
May 13 05:40:34.573063 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
May 13 05:40:34.573181 kernel: [drm] features: -context_init
May 13 05:40:34.574496 kernel: [drm] number of scanouts: 1
May 13 05:40:34.574559 kernel: [drm] number of cap sets: 0
May 13 05:40:34.576494 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0
May 13 05:40:34.579734 systemd[1]: Finished systemd-update-done.service - Update is Completed.
May 13 05:40:34.586669 augenrules[1393]: No rules
May 13 05:40:34.594813 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
May 13 05:40:34.594859 kernel: Console: switching to colour frame buffer device 160x50
May 13 05:40:34.605491 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
May 13 05:40:34.609663 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
May 13 05:40:34.609962 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
May 13 05:40:34.623836 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 13 05:40:34.625798 systemd[1]: Started systemd-userdbd.service - User Database Manager.
May 13 05:40:34.643769 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 13 05:40:34.643957 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 13 05:40:34.653793 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 13 05:40:34.664568 kernel: mousedev: PS/2 mouse device common for all mice
May 13 05:40:34.707347 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 13 05:40:34.707608 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 13 05:40:34.714646 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 13 05:40:34.718130 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
May 13 05:40:34.720771 systemd-networkd[1363]: lo: Link UP
May 13 05:40:34.721379 systemd-networkd[1363]: lo: Gained carrier
May 13 05:40:34.722696 systemd-networkd[1363]: Enumeration completed
May 13 05:40:34.726687 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
May 13 05:40:34.728333 systemd-networkd[1363]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 13 05:40:34.728339 systemd-networkd[1363]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 13 05:40:34.729094 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 13 05:40:34.732604 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
May 13 05:40:34.734686 systemd-networkd[1363]: eth0: Link UP
May 13 05:40:34.734694 systemd-networkd[1363]: eth0: Gained carrier
May 13 05:40:34.734711 systemd-networkd[1363]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 13 05:40:34.746523 systemd-networkd[1363]: eth0: DHCPv4 address 172.24.4.224/24, gateway 172.24.4.1 acquired from 172.24.4.1
May 13 05:40:34.756896 lvm[1417]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 13 05:40:34.782906 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
May 13 05:40:34.784230 systemd-resolved[1364]: Positive Trust Anchors:
May 13 05:40:34.784496 systemd-resolved[1364]: .
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 13 05:40:34.784604 systemd-resolved[1364]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 13 05:40:34.784810 systemd[1]: Reached target time-set.target - System Time Set.
May 13 05:40:34.788853 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
May 13 05:40:34.790032 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 13 05:40:34.793573 systemd-resolved[1364]: Using system hostname 'ci-4081-3-3-n-f146884e63.novalocal'.
May 13 05:40:34.796756 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
May 13 05:40:34.797720 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 13 05:40:34.797860 systemd[1]: Reached target network.target - Network.
May 13 05:40:34.797915 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 13 05:40:34.805053 lvm[1423]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 13 05:40:34.827182 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
May 13 05:40:34.841903 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 13 05:40:34.844023 systemd[1]: Reached target sysinit.target - System Initialization.
May 13 05:40:34.844904 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
May 13 05:40:34.847239 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
May 13 05:40:34.849768 systemd[1]: Started logrotate.timer - Daily rotation of log files.
May 13 05:40:34.852226 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
May 13 05:40:34.854491 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
May 13 05:40:34.856755 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
May 13 05:40:34.856858 systemd[1]: Reached target paths.target - Path Units.
May 13 05:40:34.859184 systemd[1]: Reached target timers.target - Timer Units.
May 13 05:40:34.862940 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
May 13 05:40:34.866866 systemd[1]: Starting docker.socket - Docker Socket for the API...
May 13 05:40:34.891706 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
May 13 05:40:34.895313 systemd[1]: Listening on docker.socket - Docker Socket for the API.
May 13 05:40:34.896669 systemd[1]: Reached target sockets.target - Socket Units.
May 13 05:40:34.897686 systemd[1]: Reached target basic.target - Basic System.
May 13 05:40:34.898708 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
May 13 05:40:34.898842 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
May 13 05:40:34.900453 systemd[1]: Starting containerd.service - containerd container runtime...
May 13 05:40:34.920215 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
May 13 05:40:34.924750 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
May 13 05:40:34.933027 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
May 13 05:40:34.938131 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
May 13 05:40:34.940851 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
May 13 05:40:34.943360 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
May 13 05:40:34.957665 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
May 13 05:40:34.966117 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
May 13 05:40:34.974254 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
May 13 05:40:34.987656 systemd[1]: Starting systemd-logind.service - User Login Management...
May 13 05:40:34.990393 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
May 13 05:40:34.990981 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
May 13 05:40:34.997705 systemd[1]: Starting update-engine.service - Update Engine...
May 13 05:40:35.005585 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
May 13 05:40:35.013282 jq[1433]: false
May 13 05:40:35.013924 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
May 13 05:40:35.014133 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
May 13 05:40:36.217045 systemd-timesyncd[1372]: Contacted time server 72.30.35.88:123 (0.flatcar.pool.ntp.org).
May 13 05:40:36.217108 systemd-timesyncd[1372]: Initial clock synchronization to Tue 2025-05-13 05:40:36.216908 UTC.
May 13 05:40:36.218672 systemd-resolved[1364]: Clock change detected. Flushing caches.
May 13 05:40:36.224934 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
May 13 05:40:36.228426 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
May 13 05:40:36.251449 systemd[1]: motdgen.service: Deactivated successfully.
May 13 05:40:36.251646 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
May 13 05:40:36.252912 update_engine[1443]: I20250513 05:40:36.252844 1443 main.cc:92] Flatcar Update Engine starting
May 13 05:40:36.254854 jq[1444]: true
May 13 05:40:36.262139 (ntainerd)[1460]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
May 13 05:40:36.270140 extend-filesystems[1434]: Found loop4
May 13 05:40:36.274060 extend-filesystems[1434]: Found loop5
May 13 05:40:36.274060 extend-filesystems[1434]: Found loop6
May 13 05:40:36.274060 extend-filesystems[1434]: Found loop7
May 13 05:40:36.274060 extend-filesystems[1434]: Found vda
May 13 05:40:36.274060 extend-filesystems[1434]: Found vda1
May 13 05:40:36.274060 extend-filesystems[1434]: Found vda2
May 13 05:40:36.274060 extend-filesystems[1434]: Found vda3
May 13 05:40:36.274060 extend-filesystems[1434]: Found usr
May 13 05:40:36.274060 extend-filesystems[1434]: Found vda4
May 13 05:40:36.274060 extend-filesystems[1434]: Found vda6
May 13 05:40:36.274060 extend-filesystems[1434]: Found vda7
May 13 05:40:36.274060 extend-filesystems[1434]: Found vda9
May 13 05:40:36.274060 extend-filesystems[1434]: Checking size of /dev/vda9
May 13 05:40:36.334113 tar[1448]: linux-amd64/helm
May 13 05:40:36.306960 systemd-logind[1441]: New seat seat0.
May 13 05:40:36.334619 jq[1462]: true
May 13 05:40:36.325955 systemd-logind[1441]: Watching system buttons on /dev/input/event1 (Power Button)
May 13 05:40:36.325972 systemd-logind[1441]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
May 13 05:40:36.330525 systemd[1]: Started systemd-logind.service - User Login Management.
May 13 05:40:36.369362 extend-filesystems[1434]: Resized partition /dev/vda9
May 13 05:40:36.393922 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1336)
May 13 05:40:36.393987 extend-filesystems[1486]: resize2fs 1.47.1 (20-May-2024)
May 13 05:40:36.512503 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 2014203 blocks
May 13 05:40:36.514280 dbus-daemon[1432]: [system] SELinux support is enabled
May 13 05:40:36.616269 kernel: EXT4-fs (vda9): resized filesystem to 2014203
May 13 05:40:36.514778 systemd[1]: Started dbus.service - D-Bus System Message Bus.
May 13 05:40:36.616416 update_engine[1443]: I20250513 05:40:36.546442 1443 update_check_scheduler.cc:74] Next update check in 11m39s
May 13 05:40:36.541997 dbus-daemon[1432]: [system] Successfully activated service 'org.freedesktop.systemd1'
May 13 05:40:36.524740 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
May 13 05:40:36.524766 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
May 13 05:40:36.529484 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
May 13 05:40:36.529509 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
May 13 05:40:36.543029 systemd[1]: Started update-engine.service - Update Engine.
May 13 05:40:36.553710 systemd[1]: Started locksmithd.service - Cluster reboot manager.
May 13 05:40:36.622852 extend-filesystems[1486]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
May 13 05:40:36.622852 extend-filesystems[1486]: old_desc_blocks = 1, new_desc_blocks = 1
May 13 05:40:36.622852 extend-filesystems[1486]: The filesystem on /dev/vda9 is now 2014203 (4k) blocks long.
May 13 05:40:36.621589 systemd[1]: extend-filesystems.service: Deactivated successfully.
May 13 05:40:36.632908 bash[1487]: Updated "/home/core/.ssh/authorized_keys"
May 13 05:40:36.633039 extend-filesystems[1434]: Resized filesystem in /dev/vda9
May 13 05:40:36.622258 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
May 13 05:40:36.636019 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
May 13 05:40:36.650494 systemd[1]: Starting sshkeys.service...
May 13 05:40:36.683705 locksmithd[1488]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
May 13 05:40:36.688989 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
May 13 05:40:36.701991 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
May 13 05:40:36.885629 containerd[1460]: time="2025-05-13T05:40:36.885558172Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
May 13 05:40:36.948511 containerd[1460]: time="2025-05-13T05:40:36.948294023Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
May 13 05:40:36.952209 containerd[1460]: time="2025-05-13T05:40:36.949811349Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..."
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.89-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
May 13 05:40:36.952209 containerd[1460]: time="2025-05-13T05:40:36.949843199Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
May 13 05:40:36.952209 containerd[1460]: time="2025-05-13T05:40:36.949860070Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
May 13 05:40:36.952209 containerd[1460]: time="2025-05-13T05:40:36.950034478Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
May 13 05:40:36.952209 containerd[1460]: time="2025-05-13T05:40:36.950055056Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
May 13 05:40:36.952209 containerd[1460]: time="2025-05-13T05:40:36.950122753Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
May 13 05:40:36.952209 containerd[1460]: time="2025-05-13T05:40:36.950140326Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
May 13 05:40:36.952209 containerd[1460]: time="2025-05-13T05:40:36.950333629Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
May 13 05:40:36.952209 containerd[1460]: time="2025-05-13T05:40:36.950352724Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..."
type=io.containerd.snapshotter.v1
May 13 05:40:36.952209 containerd[1460]: time="2025-05-13T05:40:36.950368153Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
May 13 05:40:36.952209 containerd[1460]: time="2025-05-13T05:40:36.950380076Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
May 13 05:40:36.952465 containerd[1460]: time="2025-05-13T05:40:36.950458623Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
May 13 05:40:36.952465 containerd[1460]: time="2025-05-13T05:40:36.950679367Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
May 13 05:40:36.952465 containerd[1460]: time="2025-05-13T05:40:36.950789343Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
May 13 05:40:36.952465 containerd[1460]: time="2025-05-13T05:40:36.950807097Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
May 13 05:40:36.952465 containerd[1460]: time="2025-05-13T05:40:36.950891084Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
May 13 05:40:36.952465 containerd[1460]: time="2025-05-13T05:40:36.950940657Z" level=info msg="metadata content store policy set" policy=shared
May 13 05:40:36.966599 containerd[1460]: time="2025-05-13T05:40:36.966577354Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
May 13 05:40:36.966702 containerd[1460]: time="2025-05-13T05:40:36.966686559Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..."
type=io.containerd.differ.v1
May 13 05:40:36.966801 containerd[1460]: time="2025-05-13T05:40:36.966785093Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
May 13 05:40:36.966869 containerd[1460]: time="2025-05-13T05:40:36.966855315Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
May 13 05:40:36.966929 containerd[1460]: time="2025-05-13T05:40:36.966914827Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
May 13 05:40:36.967103 containerd[1460]: time="2025-05-13T05:40:36.967084335Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
May 13 05:40:36.967410 containerd[1460]: time="2025-05-13T05:40:36.967391220Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
May 13 05:40:36.967562 containerd[1460]: time="2025-05-13T05:40:36.967544387Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
May 13 05:40:36.969663 containerd[1460]: time="2025-05-13T05:40:36.969225711Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
May 13 05:40:36.969663 containerd[1460]: time="2025-05-13T05:40:36.969249716Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
May 13 05:40:36.969663 containerd[1460]: time="2025-05-13T05:40:36.969266046Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
May 13 05:40:36.969663 containerd[1460]: time="2025-05-13T05:40:36.969280113Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..."
type=io.containerd.service.v1
May 13 05:40:36.969663 containerd[1460]: time="2025-05-13T05:40:36.969292807Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
May 13 05:40:36.969663 containerd[1460]: time="2025-05-13T05:40:36.969306512Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
May 13 05:40:36.969663 containerd[1460]: time="2025-05-13T05:40:36.969320819Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
May 13 05:40:36.969663 containerd[1460]: time="2025-05-13T05:40:36.969334375Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
May 13 05:40:36.969663 containerd[1460]: time="2025-05-13T05:40:36.969349613Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
May 13 05:40:36.969663 containerd[1460]: time="2025-05-13T05:40:36.969361876Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
May 13 05:40:36.969663 containerd[1460]: time="2025-05-13T05:40:36.969381252Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
May 13 05:40:36.969663 containerd[1460]: time="2025-05-13T05:40:36.969395199Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
May 13 05:40:36.969663 containerd[1460]: time="2025-05-13T05:40:36.969408383Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
May 13 05:40:36.969663 containerd[1460]: time="2025-05-13T05:40:36.969422109Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..."
type=io.containerd.grpc.v1
May 13 05:40:36.969961 containerd[1460]: time="2025-05-13T05:40:36.969434813Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
May 13 05:40:36.969961 containerd[1460]: time="2025-05-13T05:40:36.969451314Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
May 13 05:40:36.969961 containerd[1460]: time="2025-05-13T05:40:36.969464469Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
May 13 05:40:36.969961 containerd[1460]: time="2025-05-13T05:40:36.969478725Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
May 13 05:40:36.969961 containerd[1460]: time="2025-05-13T05:40:36.969491920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
May 13 05:40:36.969961 containerd[1460]: time="2025-05-13T05:40:36.969510475Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
May 13 05:40:36.969961 containerd[1460]: time="2025-05-13T05:40:36.969524481Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
May 13 05:40:36.969961 containerd[1460]: time="2025-05-13T05:40:36.969537736Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
May 13 05:40:36.969961 containerd[1460]: time="2025-05-13T05:40:36.969551011Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
May 13 05:40:36.969961 containerd[1460]: time="2025-05-13T05:40:36.969570036Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
May 13 05:40:36.969961 containerd[1460]: time="2025-05-13T05:40:36.969593090Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..."
type=io.containerd.grpc.v1
May 13 05:40:36.969961 containerd[1460]: time="2025-05-13T05:40:36.969606355Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
May 13 05:40:36.969961 containerd[1460]: time="2025-05-13T05:40:36.969618237Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
May 13 05:40:36.971180 containerd[1460]: time="2025-05-13T05:40:36.970279577Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
May 13 05:40:36.971180 containerd[1460]: time="2025-05-13T05:40:36.970307389Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
May 13 05:40:36.971180 containerd[1460]: time="2025-05-13T05:40:36.970368434Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
May 13 05:40:36.971180 containerd[1460]: time="2025-05-13T05:40:36.970387680Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
May 13 05:40:36.971180 containerd[1460]: time="2025-05-13T05:40:36.970399162Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
May 13 05:40:36.971180 containerd[1460]: time="2025-05-13T05:40:36.970412156Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
May 13 05:40:36.971180 containerd[1460]: time="2025-05-13T05:40:36.970422756Z" level=info msg="NRI interface is disabled by configuration."
May 13 05:40:36.971180 containerd[1460]: time="2025-05-13T05:40:36.970433606Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..."
type=io.containerd.grpc.v1
May 13 05:40:36.971400 containerd[1460]: time="2025-05-13T05:40:36.970708682Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
May 13 05:40:36.971400 containerd[1460]: time="2025-05-13T05:40:36.970780266Z" level=info msg="Connect containerd service"
May 13 05:40:36.971400 containerd[1460]: time="2025-05-13T05:40:36.970809471Z" level=info msg="using legacy CRI server"
May 13 05:40:36.971400 containerd[1460]: time="2025-05-13T05:40:36.970816815Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
May 13 05:40:36.971400 containerd[1460]: time="2025-05-13T05:40:36.970925869Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
May 13 05:40:36.973567 containerd[1460]: time="2025-05-13T05:40:36.973543208Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 13 05:40:36.973761 containerd[1460]: time="2025-05-13T05:40:36.973727414Z" level=info msg="Start subscribing containerd event"
May 13 05:40:36.974136 containerd[1460]: time="2025-05-13T05:40:36.973822362Z" level=info msg="Start recovering state"
May 13 05:40:36.974136 containerd[1460]: time="2025-05-13T05:40:36.973877796Z" level=info msg="Start event monitor"
May 13 05:40:36.974136 containerd[1460]: time="2025-05-13T05:40:36.973890059Z" level=info msg="Start
snapshots syncer" May 13 05:40:36.974136 containerd[1460]: time="2025-05-13T05:40:36.973898945Z" level=info msg="Start cni network conf syncer for default" May 13 05:40:36.974136 containerd[1460]: time="2025-05-13T05:40:36.973907181Z" level=info msg="Start streaming server" May 13 05:40:36.974443 containerd[1460]: time="2025-05-13T05:40:36.974424832Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 13 05:40:36.974544 containerd[1460]: time="2025-05-13T05:40:36.974528105Z" level=info msg=serving... address=/run/containerd/containerd.sock May 13 05:40:36.976712 containerd[1460]: time="2025-05-13T05:40:36.976266115Z" level=info msg="containerd successfully booted in 0.093728s" May 13 05:40:36.976366 systemd[1]: Started containerd.service - containerd container runtime. May 13 05:40:37.059883 sshd_keygen[1461]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 13 05:40:37.086706 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 13 05:40:37.098379 systemd[1]: Starting issuegen.service - Generate /run/issue... May 13 05:40:37.106675 systemd[1]: issuegen.service: Deactivated successfully. May 13 05:40:37.106847 systemd[1]: Finished issuegen.service - Generate /run/issue. May 13 05:40:37.117648 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 13 05:40:37.119891 tar[1448]: linux-amd64/LICENSE May 13 05:40:37.119891 tar[1448]: linux-amd64/README.md May 13 05:40:37.132369 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 13 05:40:37.139081 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 13 05:40:37.150756 systemd[1]: Started getty@tty1.service - Getty on tty1. May 13 05:40:37.162053 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. May 13 05:40:37.164030 systemd[1]: Reached target getty.target - Login Prompts. 
May 13 05:40:37.326494 systemd-networkd[1363]: eth0: Gained IPv6LL
May 13 05:40:37.334124 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
May 13 05:40:37.339842 systemd[1]: Reached target network-online.target - Network is Online.
May 13 05:40:37.350728 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 13 05:40:37.370990 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
May 13 05:40:37.419380 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
May 13 05:40:39.587712 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 13 05:40:39.614807 (kubelet)[1546]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 13 05:40:41.080223 kubelet[1546]: E0513 05:40:41.080084 1546 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 13 05:40:41.083655 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 13 05:40:41.083977 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 13 05:40:41.084615 systemd[1]: kubelet.service: Consumed 2.329s CPU time.
May 13 05:40:42.282690 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
May 13 05:40:42.290827 systemd[1]: Started sshd@0-172.24.4.224:22-172.24.4.1:46118.service - OpenSSH per-connection server daemon (172.24.4.1:46118).
May 13 05:40:42.299270 login[1525]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
May 13 05:40:42.303490 login[1526]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
May 13 05:40:42.332111 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
May 13 05:40:42.340997 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
May 13 05:40:42.347722 systemd-logind[1441]: New session 1 of user core.
May 13 05:40:42.359586 systemd-logind[1441]: New session 2 of user core.
May 13 05:40:42.377383 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
May 13 05:40:42.384856 systemd[1]: Starting user@500.service - User Manager for UID 500...
May 13 05:40:42.395697 (systemd)[1562]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
May 13 05:40:42.708262 systemd[1562]: Queued start job for default target default.target.
May 13 05:40:42.720144 systemd[1562]: Created slice app.slice - User Application Slice.
May 13 05:40:42.720516 systemd[1562]: Reached target paths.target - Paths.
May 13 05:40:42.720533 systemd[1562]: Reached target timers.target - Timers.
May 13 05:40:42.722037 systemd[1562]: Starting dbus.socket - D-Bus User Message Bus Socket...
May 13 05:40:42.759446 systemd[1562]: Listening on dbus.socket - D-Bus User Message Bus Socket.
May 13 05:40:42.759958 systemd[1562]: Reached target sockets.target - Sockets.
May 13 05:40:42.760157 systemd[1562]: Reached target basic.target - Basic System.
May 13 05:40:42.760471 systemd[1562]: Reached target default.target - Main User Target.
May 13 05:40:42.760573 systemd[1]: Started user@500.service - User Manager for UID 500.
May 13 05:40:42.761877 systemd[1562]: Startup finished in 354ms.
May 13 05:40:42.768743 systemd[1]: Started session-1.scope - Session 1 of User core.
May 13 05:40:42.770806 systemd[1]: Started session-2.scope - Session 2 of User core.
May 13 05:40:43.228625 coreos-metadata[1431]: May 13 05:40:43.228 WARN failed to locate config-drive, using the metadata service API instead
May 13 05:40:43.282580 coreos-metadata[1431]: May 13 05:40:43.282 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1
May 13 05:40:43.733666 coreos-metadata[1431]: May 13 05:40:43.733 INFO Fetch successful
May 13 05:40:43.733666 coreos-metadata[1431]: May 13 05:40:43.733 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
May 13 05:40:43.746768 coreos-metadata[1431]: May 13 05:40:43.746 INFO Fetch successful
May 13 05:40:43.746996 coreos-metadata[1431]: May 13 05:40:43.746 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1
May 13 05:40:43.761507 coreos-metadata[1431]: May 13 05:40:43.761 INFO Fetch successful
May 13 05:40:43.761507 coreos-metadata[1431]: May 13 05:40:43.761 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1
May 13 05:40:43.776338 coreos-metadata[1431]: May 13 05:40:43.776 INFO Fetch successful
May 13 05:40:43.776813 coreos-metadata[1431]: May 13 05:40:43.776 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1
May 13 05:40:43.791448 coreos-metadata[1431]: May 13 05:40:43.791 INFO Fetch successful
May 13 05:40:43.791448 coreos-metadata[1431]: May 13 05:40:43.791 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1
May 13 05:40:43.805049 coreos-metadata[1503]: May 13 05:40:43.804 WARN failed to locate config-drive, using the metadata service API instead
May 13 05:40:43.821437 coreos-metadata[1431]: May 13 05:40:43.821 INFO Fetch successful
May 13 05:40:43.861417 coreos-metadata[1503]: May 13 05:40:43.861 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1
May 13 05:40:43.873362 coreos-metadata[1503]: May 13 05:40:43.873 INFO Fetch successful
May 13 05:40:43.874533 coreos-metadata[1503]: May 13 05:40:43.874 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1
May 13 05:40:43.884708 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
May 13 05:40:43.887062 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
May 13 05:40:43.888940 coreos-metadata[1503]: May 13 05:40:43.888 INFO Fetch successful
May 13 05:40:43.894558 unknown[1503]: wrote ssh authorized keys file for user: core
May 13 05:40:43.937557 sshd[1559]: Accepted publickey for core from 172.24.4.1 port 46118 ssh2: RSA SHA256:+E8Eq1uMTdkzoHHj4Cx4DdKLDlGHP/AhkvM7vBSKHyU
May 13 05:40:43.940565 sshd[1559]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 05:40:43.945009 update-ssh-keys[1603]: Updated "/home/core/.ssh/authorized_keys"
May 13 05:40:43.947331 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
May 13 05:40:43.950499 systemd[1]: Finished sshkeys.service.
May 13 05:40:43.954933 systemd[1]: Reached target multi-user.target - Multi-User System.
May 13 05:40:43.955323 systemd[1]: Startup finished in 1.209s (kernel) + 17.788s (initrd) + 11.218s (userspace) = 30.216s.
May 13 05:40:43.961671 systemd-logind[1441]: New session 3 of user core.
May 13 05:40:43.971508 systemd[1]: Started session-3.scope - Session 3 of User core.
May 13 05:40:44.436912 systemd[1]: Started sshd@1-172.24.4.224:22-172.24.4.1:37362.service - OpenSSH per-connection server daemon (172.24.4.1:37362).
May 13 05:40:45.631056 sshd[1610]: Accepted publickey for core from 172.24.4.1 port 37362 ssh2: RSA SHA256:+E8Eq1uMTdkzoHHj4Cx4DdKLDlGHP/AhkvM7vBSKHyU
May 13 05:40:45.633450 sshd[1610]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 05:40:45.641986 systemd-logind[1441]: New session 4 of user core.
May 13 05:40:45.652494 systemd[1]: Started session-4.scope - Session 4 of User core.
May 13 05:40:46.232643 sshd[1610]: pam_unix(sshd:session): session closed for user core
May 13 05:40:46.245761 systemd[1]: sshd@1-172.24.4.224:22-172.24.4.1:37362.service: Deactivated successfully.
May 13 05:40:46.247107 systemd[1]: session-4.scope: Deactivated successfully.
May 13 05:40:46.248433 systemd-logind[1441]: Session 4 logged out. Waiting for processes to exit.
May 13 05:40:46.253564 systemd[1]: Started sshd@2-172.24.4.224:22-172.24.4.1:37374.service - OpenSSH per-connection server daemon (172.24.4.1:37374).
May 13 05:40:46.255404 systemd-logind[1441]: Removed session 4.
May 13 05:40:47.612191 sshd[1617]: Accepted publickey for core from 172.24.4.1 port 37374 ssh2: RSA SHA256:+E8Eq1uMTdkzoHHj4Cx4DdKLDlGHP/AhkvM7vBSKHyU
May 13 05:40:47.614693 sshd[1617]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 05:40:47.623544 systemd-logind[1441]: New session 5 of user core.
May 13 05:40:47.631500 systemd[1]: Started session-5.scope - Session 5 of User core.
May 13 05:40:48.209690 sshd[1617]: pam_unix(sshd:session): session closed for user core
May 13 05:40:48.229173 systemd[1]: sshd@2-172.24.4.224:22-172.24.4.1:37374.service: Deactivated successfully.
May 13 05:40:48.233189 systemd[1]: session-5.scope: Deactivated successfully.
May 13 05:40:48.237754 systemd-logind[1441]: Session 5 logged out. Waiting for processes to exit.
May 13 05:40:48.252821 systemd[1]: Started sshd@3-172.24.4.224:22-172.24.4.1:37386.service - OpenSSH per-connection server daemon (172.24.4.1:37386).
May 13 05:40:48.258419 systemd-logind[1441]: Removed session 5.
May 13 05:40:49.536092 sshd[1624]: Accepted publickey for core from 172.24.4.1 port 37386 ssh2: RSA SHA256:+E8Eq1uMTdkzoHHj4Cx4DdKLDlGHP/AhkvM7vBSKHyU
May 13 05:40:49.539473 sshd[1624]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 05:40:49.549037 systemd-logind[1441]: New session 6 of user core.
May 13 05:40:49.561696 systemd[1]: Started session-6.scope - Session 6 of User core.
May 13 05:40:50.193836 sshd[1624]: pam_unix(sshd:session): session closed for user core
May 13 05:40:50.208769 systemd[1]: sshd@3-172.24.4.224:22-172.24.4.1:37386.service: Deactivated successfully.
May 13 05:40:50.211792 systemd[1]: session-6.scope: Deactivated successfully.
May 13 05:40:50.214528 systemd-logind[1441]: Session 6 logged out. Waiting for processes to exit.
May 13 05:40:50.221835 systemd[1]: Started sshd@4-172.24.4.224:22-172.24.4.1:37402.service - OpenSSH per-connection server daemon (172.24.4.1:37402).
May 13 05:40:50.224838 systemd-logind[1441]: Removed session 6.
May 13 05:40:51.124838 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
May 13 05:40:51.132606 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 13 05:40:51.485662 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 13 05:40:51.488733 (kubelet)[1641]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 13 05:40:51.514082 sshd[1631]: Accepted publickey for core from 172.24.4.1 port 37402 ssh2: RSA SHA256:+E8Eq1uMTdkzoHHj4Cx4DdKLDlGHP/AhkvM7vBSKHyU
May 13 05:40:51.518015 sshd[1631]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 05:40:51.529351 systemd-logind[1441]: New session 7 of user core.
May 13 05:40:51.539759 systemd[1]: Started session-7.scope - Session 7 of User core.
May 13 05:40:51.576843 kubelet[1641]: E0513 05:40:51.576800 1641 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 13 05:40:51.580134 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 13 05:40:51.580504 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 13 05:40:51.907511 sudo[1650]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
May 13 05:40:51.908277 sudo[1650]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 13 05:40:51.923350 sudo[1650]: pam_unix(sudo:session): session closed for user root
May 13 05:40:52.160772 sshd[1631]: pam_unix(sshd:session): session closed for user core
May 13 05:40:52.170279 systemd[1]: sshd@4-172.24.4.224:22-172.24.4.1:37402.service: Deactivated successfully.
May 13 05:40:52.173359 systemd[1]: session-7.scope: Deactivated successfully.
May 13 05:40:52.176359 systemd-logind[1441]: Session 7 logged out. Waiting for processes to exit.
May 13 05:40:52.184788 systemd[1]: Started sshd@5-172.24.4.224:22-172.24.4.1:37412.service - OpenSSH per-connection server daemon (172.24.4.1:37412).
May 13 05:40:52.187090 systemd-logind[1441]: Removed session 7.
May 13 05:40:53.187912 sshd[1655]: Accepted publickey for core from 172.24.4.1 port 37412 ssh2: RSA SHA256:+E8Eq1uMTdkzoHHj4Cx4DdKLDlGHP/AhkvM7vBSKHyU
May 13 05:40:53.191287 sshd[1655]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 05:40:53.203334 systemd-logind[1441]: New session 8 of user core.
May 13 05:40:53.214568 systemd[1]: Started session-8.scope - Session 8 of User core.
May 13 05:40:53.778622 sudo[1659]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
May 13 05:40:53.779548 sudo[1659]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 13 05:40:53.786678 sudo[1659]: pam_unix(sudo:session): session closed for user root
May 13 05:40:53.797753 sudo[1658]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
May 13 05:40:53.798979 sudo[1658]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 13 05:40:53.828862 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
May 13 05:40:53.832103 auditctl[1662]: No rules
May 13 05:40:53.832814 systemd[1]: audit-rules.service: Deactivated successfully.
May 13 05:40:53.833179 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
May 13 05:40:53.841050 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
May 13 05:40:53.903684 augenrules[1680]: No rules
May 13 05:40:53.906388 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
May 13 05:40:53.908142 sudo[1658]: pam_unix(sudo:session): session closed for user root
May 13 05:40:54.084599 sshd[1655]: pam_unix(sshd:session): session closed for user core
May 13 05:40:54.097113 systemd[1]: sshd@5-172.24.4.224:22-172.24.4.1:37412.service: Deactivated successfully.
May 13 05:40:54.100110 systemd[1]: session-8.scope: Deactivated successfully.
May 13 05:40:54.103840 systemd-logind[1441]: Session 8 logged out. Waiting for processes to exit.
May 13 05:40:54.110130 systemd[1]: Started sshd@6-172.24.4.224:22-172.24.4.1:43376.service - OpenSSH per-connection server daemon (172.24.4.1:43376).
May 13 05:40:54.116849 systemd-logind[1441]: Removed session 8.
May 13 05:40:55.322820 sshd[1688]: Accepted publickey for core from 172.24.4.1 port 43376 ssh2: RSA SHA256:+E8Eq1uMTdkzoHHj4Cx4DdKLDlGHP/AhkvM7vBSKHyU
May 13 05:40:55.327359 sshd[1688]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 05:40:55.340400 systemd-logind[1441]: New session 9 of user core.
May 13 05:40:55.358566 systemd[1]: Started session-9.scope - Session 9 of User core.
May 13 05:40:55.741585 sudo[1691]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
May 13 05:40:55.742925 sudo[1691]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 13 05:40:56.358849 (dockerd)[1707]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
May 13 05:40:56.359049 systemd[1]: Starting docker.service - Docker Application Container Engine...
May 13 05:40:56.975007 dockerd[1707]: time="2025-05-13T05:40:56.974518101Z" level=info msg="Starting up"
May 13 05:40:57.145980 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1432395550-merged.mount: Deactivated successfully.
May 13 05:40:57.201436 dockerd[1707]: time="2025-05-13T05:40:57.201316522Z" level=info msg="Loading containers: start."
May 13 05:40:57.364452 kernel: Initializing XFRM netlink socket
May 13 05:40:57.501271 systemd-networkd[1363]: docker0: Link UP
May 13 05:40:57.518984 dockerd[1707]: time="2025-05-13T05:40:57.518940185Z" level=info msg="Loading containers: done."
May 13 05:40:57.542314 dockerd[1707]: time="2025-05-13T05:40:57.542250323Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
May 13 05:40:57.542433 dockerd[1707]: time="2025-05-13T05:40:57.542368655Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
May 13 05:40:57.542521 dockerd[1707]: time="2025-05-13T05:40:57.542492457Z" level=info msg="Daemon has completed initialization"
May 13 05:40:57.585421 dockerd[1707]: time="2025-05-13T05:40:57.585078628Z" level=info msg="API listen on /run/docker.sock"
May 13 05:40:57.585392 systemd[1]: Started docker.service - Docker Application Container Engine.
May 13 05:40:59.226734 containerd[1460]: time="2025-05-13T05:40:59.225884258Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\""
May 13 05:41:00.046780 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3863362446.mount: Deactivated successfully.
May 13 05:41:01.624541 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
May 13 05:41:01.631658 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 13 05:41:01.751329 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 13 05:41:01.756017 (kubelet)[1907]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 13 05:41:02.029946 containerd[1460]: time="2025-05-13T05:41:02.029673244Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 05:41:02.035289 containerd[1460]: time="2025-05-13T05:41:02.035075025Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.8: active requests=0, bytes read=27960995"
May 13 05:41:02.040476 containerd[1460]: time="2025-05-13T05:41:02.040272232Z" level=info msg="ImageCreate event name:\"sha256:e6d208e868a9ca7f89efcb0d5bddc55a62df551cb4fb39c5099a2fe7b0e33adc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 05:41:02.057254 containerd[1460]: time="2025-05-13T05:41:02.055033467Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 05:41:02.059876 containerd[1460]: time="2025-05-13T05:41:02.057167780Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.8\" with image id \"sha256:e6d208e868a9ca7f89efcb0d5bddc55a62df551cb4fb39c5099a2fe7b0e33adc\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625\", size \"27957787\" in 2.831195898s"
May 13 05:41:02.060107 containerd[1460]: time="2025-05-13T05:41:02.060067869Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\" returns image reference \"sha256:e6d208e868a9ca7f89efcb0d5bddc55a62df551cb4fb39c5099a2fe7b0e33adc\""
May 13 05:41:02.065253 containerd[1460]: time="2025-05-13T05:41:02.065130794Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\""
May 13 05:41:02.071002 kubelet[1907]: E0513 05:41:02.070934 1907 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 13 05:41:02.075993 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 13 05:41:02.076379 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 13 05:41:04.046646 containerd[1460]: time="2025-05-13T05:41:04.046589830Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 05:41:04.048481 containerd[1460]: time="2025-05-13T05:41:04.048447825Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.8: active requests=0, bytes read=24713784"
May 13 05:41:04.050256 containerd[1460]: time="2025-05-13T05:41:04.050232121Z" level=info msg="ImageCreate event name:\"sha256:fbda0bc3bc4bb93c8b2d8627a9aa8d945c200b51e48c88f9b837dde628fc7c8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 05:41:04.052950 containerd[1460]: time="2025-05-13T05:41:04.052925022Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 05:41:04.054440 containerd[1460]: time="2025-05-13T05:41:04.054413453Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.8\" with image id \"sha256:fbda0bc3bc4bb93c8b2d8627a9aa8d945c200b51e48c88f9b837dde628fc7c8f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747\", size \"26202149\" in 1.98788622s"
May 13 05:41:04.054586 containerd[1460]: time="2025-05-13T05:41:04.054517919Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\" returns image reference \"sha256:fbda0bc3bc4bb93c8b2d8627a9aa8d945c200b51e48c88f9b837dde628fc7c8f\""
May 13 05:41:04.055545 containerd[1460]: time="2025-05-13T05:41:04.055526892Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\""
May 13 05:41:06.107072 containerd[1460]: time="2025-05-13T05:41:06.106934005Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 05:41:06.110151 containerd[1460]: time="2025-05-13T05:41:06.109775565Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.8: active requests=0, bytes read=18780394"
May 13 05:41:06.113258 containerd[1460]: time="2025-05-13T05:41:06.111878469Z" level=info msg="ImageCreate event name:\"sha256:2a9c646db0be37003c2b50605a252f7139145411d9e4e0badd8ae07f56ce5eb8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 05:41:06.121624 containerd[1460]: time="2025-05-13T05:41:06.121539748Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 05:41:06.125141 containerd[1460]: time="2025-05-13T05:41:06.125039282Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.8\" with image id \"sha256:2a9c646db0be37003c2b50605a252f7139145411d9e4e0badd8ae07f56ce5eb8\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a\", size \"20268777\" in 2.068825352s"
May 13 05:41:06.125455 containerd[1460]: time="2025-05-13T05:41:06.125408384Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\" returns image reference \"sha256:2a9c646db0be37003c2b50605a252f7139145411d9e4e0badd8ae07f56ce5eb8\""
May 13 05:41:06.129730 containerd[1460]: time="2025-05-13T05:41:06.129660789Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\""
May 13 05:41:07.598152 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1742411738.mount: Deactivated successfully.
May 13 05:41:08.167856 containerd[1460]: time="2025-05-13T05:41:08.167805469Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 05:41:08.169097 containerd[1460]: time="2025-05-13T05:41:08.169026720Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.8: active requests=0, bytes read=30354633"
May 13 05:41:08.170315 containerd[1460]: time="2025-05-13T05:41:08.170267507Z" level=info msg="ImageCreate event name:\"sha256:7d73f013cedcf301aef42272c93e4c1174dab1a8eccd96840091ef04b63480f2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 05:41:08.172902 containerd[1460]: time="2025-05-13T05:41:08.172856262Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 05:41:08.174528 containerd[1460]: time="2025-05-13T05:41:08.173564009Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.8\" with image id \"sha256:7d73f013cedcf301aef42272c93e4c1174dab1a8eccd96840091ef04b63480f2\", repo tag \"registry.k8s.io/kube-proxy:v1.31.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838\", size \"30353644\" in 2.043835233s"
May 13 05:41:08.174528 containerd[1460]: time="2025-05-13T05:41:08.173598634Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\" returns image reference \"sha256:7d73f013cedcf301aef42272c93e4c1174dab1a8eccd96840091ef04b63480f2\""
May 13 05:41:08.175103 containerd[1460]: time="2025-05-13T05:41:08.175063782Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
May 13 05:41:08.863807 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2291751396.mount: Deactivated successfully.
May 13 05:41:10.684133 containerd[1460]: time="2025-05-13T05:41:10.684087180Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 05:41:10.685765 containerd[1460]: time="2025-05-13T05:41:10.685724685Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185769"
May 13 05:41:10.686992 containerd[1460]: time="2025-05-13T05:41:10.686960977Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 05:41:10.692224 containerd[1460]: time="2025-05-13T05:41:10.691294203Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 05:41:10.692301 containerd[1460]: time="2025-05-13T05:41:10.692257958Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 2.516968622s"
May 13 05:41:10.692339 containerd[1460]: time="2025-05-13T05:41:10.692324224Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
May 13 05:41:10.693249 containerd[1460]: time="2025-05-13T05:41:10.693226220Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
May 13 05:41:11.394847 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4081403152.mount: Deactivated successfully.
May 13 05:41:11.408391 containerd[1460]: time="2025-05-13T05:41:11.408070889Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 05:41:11.410260 containerd[1460]: time="2025-05-13T05:41:11.410017730Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146"
May 13 05:41:11.412387 containerd[1460]: time="2025-05-13T05:41:11.412299478Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 05:41:11.417525 containerd[1460]: time="2025-05-13T05:41:11.417399054Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 05:41:11.420120 containerd[1460]: time="2025-05-13T05:41:11.419439603Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 726.172546ms"
May 13 05:41:11.420120 containerd[1460]: time="2025-05-13T05:41:11.419504737Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
May 13 05:41:11.421304 containerd[1460]: time="2025-05-13T05:41:11.420948010Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
May 13 05:41:12.102910 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1397538012.mount: Deactivated successfully.
May 13 05:41:12.106858 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
May 13 05:41:12.120574 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 13 05:41:12.364293 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 13 05:41:12.377610 (kubelet)[1994]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 13 05:41:12.548840 kubelet[1994]: E0513 05:41:12.548780 1994 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 13 05:41:12.550534 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 13 05:41:12.550798 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 13 05:41:15.419574 containerd[1460]: time="2025-05-13T05:41:15.419371602Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 05:41:15.424918 containerd[1460]: time="2025-05-13T05:41:15.424590243Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56780021"
May 13 05:41:15.430244 containerd[1460]: time="2025-05-13T05:41:15.428651922Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 05:41:15.580464 containerd[1460]: time="2025-05-13T05:41:15.580387672Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 05:41:15.585007 containerd[1460]: time="2025-05-13T05:41:15.584908381Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 4.163894926s"
May 13 05:41:15.586327 containerd[1460]: time="2025-05-13T05:41:15.586265814Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\""
May 13 05:41:20.113310 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 13 05:41:20.126967 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 13 05:41:20.186148 systemd[1]: Reloading requested from client PID 2074 ('systemctl') (unit session-9.scope)...
May 13 05:41:20.186184 systemd[1]: Reloading...
May 13 05:41:20.305480 zram_generator::config[2113]: No configuration found.
May 13 05:41:20.450387 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 13 05:41:20.534745 systemd[1]: Reloading finished in 347 ms.
May 13 05:41:20.581308 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
May 13 05:41:20.581584 systemd[1]: kubelet.service: Failed with result 'signal'.
May 13 05:41:20.582215 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 13 05:41:20.584218 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 13 05:41:20.691363 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 13 05:41:20.700580 (kubelet)[2178]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
May 13 05:41:20.746705 kubelet[2178]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 13 05:41:20.747272 kubelet[2178]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
May 13 05:41:20.747272 kubelet[2178]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 13 05:41:20.915367 kubelet[2178]: I0513 05:41:20.915250 2178 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
May 13 05:41:21.503244 kubelet[2178]: I0513 05:41:21.502418 2178 server.go:486] "Kubelet version" kubeletVersion="v1.31.0"
May 13 05:41:21.503244 kubelet[2178]: I0513 05:41:21.502474 2178 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
May 13 05:41:21.503386 kubelet[2178]: I0513 05:41:21.503360 2178 server.go:929] "Client rotation is on, will bootstrap in background"
May 13 05:41:21.746107 update_engine[1443]: I20250513 05:41:21.746008 1443 update_attempter.cc:509] Updating boot flags...
May 13 05:41:22.079783 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2193)
May 13 05:41:22.104273 kubelet[2178]: E0513 05:41:22.103757 2178 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.24.4.224:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.24.4.224:6443: connect: connection refused" logger="UnhandledError"
May 13 05:41:22.111246 kubelet[2178]: I0513 05:41:22.107083 2178 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 13 05:41:22.130445 kubelet[2178]: E0513 05:41:22.130405 2178 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
May 13 05:41:22.130744 kubelet[2178]: I0513 05:41:22.130720 2178 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
May 13 05:41:22.135950 kubelet[2178]: I0513 05:41:22.135914 2178 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
May 13 05:41:22.137900 kubelet[2178]: I0513 05:41:22.137872 2178 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
May 13 05:41:22.138039 kubelet[2178]: I0513 05:41:22.138009 2178 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
May 13 05:41:22.138231 kubelet[2178]: I0513 05:41:22.138034 2178 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-3-n-f146884e63.novalocal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
May 13 05:41:22.138231 kubelet[2178]: I0513 05:41:22.138226 2178 topology_manager.go:138] "Creating topology manager with none policy"
May 13 05:41:22.138231 kubelet[2178]: I0513 05:41:22.138237 2178 container_manager_linux.go:300] "Creating device plugin manager"
May 13 05:41:22.138449 kubelet[2178]: I0513 05:41:22.138339 2178 state_mem.go:36] "Initialized new in-memory state store"
May 13 05:41:22.145400 kubelet[2178]: I0513 05:41:22.145089 2178 kubelet.go:408] "Attempting to sync node with API server"
May 13 05:41:22.145400 kubelet[2178]: I0513 05:41:22.145115 2178 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
May 13 05:41:22.145400 kubelet[2178]: I0513 05:41:22.145141 2178 kubelet.go:314] "Adding apiserver pod source"
May 13 05:41:22.145400 kubelet[2178]: I0513 05:41:22.145158 2178 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
May 13 05:41:22.166405 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2197)
May 13 05:41:22.167494 kubelet[2178]: W0513 05:41:22.167267 2178 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.24.4.224:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-3-n-f146884e63.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.224:6443: connect: connection refused
May 13 05:41:22.167494 kubelet[2178]: E0513 05:41:22.167348 2178 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.24.4.224:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-3-n-f146884e63.novalocal&limit=500&resourceVersion=0\": dial tcp 172.24.4.224:6443: connect: connection refused" logger="UnhandledError"
May 13 05:41:22.168242 kubelet[2178]: W0513 05:41:22.167770 2178 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.24.4.224:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.24.4.224:6443: connect: connection refused
May 13 05:41:22.168242 kubelet[2178]: E0513 05:41:22.167818 2178 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.24.4.224:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.24.4.224:6443: connect: connection refused" logger="UnhandledError"
May 13 05:41:22.168242 kubelet[2178]: I0513 05:41:22.167901 2178 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
May 13 05:41:22.174415 kubelet[2178]: I0513 05:41:22.174390 2178 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
May 13 05:41:22.181560 kubelet[2178]: W0513 05:41:22.181254 2178 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
May 13 05:41:22.181902 kubelet[2178]: I0513 05:41:22.181879 2178 server.go:1269] "Started kubelet"
May 13 05:41:22.187212 kubelet[2178]: I0513 05:41:22.186627 2178 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
May 13 05:41:22.189867 kubelet[2178]: I0513 05:41:22.189841 2178 server.go:460] "Adding debug handlers to kubelet server"
May 13 05:41:22.191512 kubelet[2178]: I0513 05:41:22.191000 2178 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
May 13 05:41:22.204558 kubelet[2178]: I0513 05:41:22.204532 2178 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
May 13 05:41:22.207712 kubelet[2178]: I0513 05:41:22.191470 2178 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
May 13 05:41:22.207712 kubelet[2178]: I0513 05:41:22.206934 2178 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
May 13 05:41:22.207712 kubelet[2178]: I0513 05:41:22.207003 2178 volume_manager.go:289] "Starting Kubelet Volume Manager"
May 13 05:41:22.207712 kubelet[2178]: E0513 05:41:22.207210 2178 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-3-n-f146884e63.novalocal\" not found"
May 13 05:41:22.207712 kubelet[2178]: I0513 05:41:22.207268 2178 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
May 13 05:41:22.207712 kubelet[2178]: I0513 05:41:22.207307 2178 reconciler.go:26] "Reconciler: start to sync state"
May 13 05:41:22.207712 kubelet[2178]: W0513 05:41:22.207572 2178 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.24.4.224:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.224:6443: connect: connection refused
May 13 05:41:22.207712 kubelet[2178]: E0513 05:41:22.207613 2178 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.24.4.224:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.24.4.224:6443: connect: connection refused" logger="UnhandledError"
May 13 05:41:22.209075 kubelet[2178]: E0513 05:41:22.208725 2178 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.224:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-3-n-f146884e63.novalocal?timeout=10s\": dial tcp 172.24.4.224:6443: connect: connection refused" interval="200ms"
May 13 05:41:22.211061 kubelet[2178]: I0513 05:41:22.211046 2178 factory.go:221] Registration of the systemd container factory successfully
May 13 05:41:22.211227 kubelet[2178]: I0513 05:41:22.211187 2178 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
May 13 05:41:22.213253 kubelet[2178]: E0513 05:41:22.198168 2178 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.24.4.224:6443/api/v1/namespaces/default/events\": dial tcp 172.24.4.224:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-3-n-f146884e63.novalocal.183effba8e3f7565 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-3-n-f146884e63.novalocal,UID:ci-4081-3-3-n-f146884e63.novalocal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-3-n-f146884e63.novalocal,},FirstTimestamp:2025-05-13 05:41:22.181854565 +0000 UTC m=+1.478058655,LastTimestamp:2025-05-13 05:41:22.181854565 +0000 UTC m=+1.478058655,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-3-n-f146884e63.novalocal,}"
May 13 05:41:22.217928 kubelet[2178]: I0513 05:41:22.217907 2178 factory.go:221] Registration of the containerd container factory successfully
May 13 05:41:22.226252 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2197)
May 13 05:41:22.228638 kubelet[2178]: I0513 05:41:22.228587 2178 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
May 13 05:41:22.229611 kubelet[2178]: I0513 05:41:22.229587 2178 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
May 13 05:41:22.229680 kubelet[2178]: I0513 05:41:22.229615 2178 status_manager.go:217] "Starting to sync pod status with apiserver"
May 13 05:41:22.229680 kubelet[2178]: I0513 05:41:22.229650 2178 kubelet.go:2321] "Starting kubelet main sync loop"
May 13 05:41:22.229750 kubelet[2178]: E0513 05:41:22.229725 2178 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
May 13 05:41:22.246239 kubelet[2178]: W0513 05:41:22.245128 2178 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.24.4.224:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.224:6443: connect: connection refused
May 13 05:41:22.246239 kubelet[2178]: E0513 05:41:22.245181 2178 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.24.4.224:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.24.4.224:6443: connect: connection refused" logger="UnhandledError"
May 13 05:41:22.260554 kubelet[2178]: E0513 05:41:22.260521 2178 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
May 13 05:41:22.273119 kubelet[2178]: I0513 05:41:22.272870 2178 cpu_manager.go:214] "Starting CPU manager" policy="none"
May 13 05:41:22.273119 kubelet[2178]: I0513 05:41:22.272887 2178 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
May 13 05:41:22.273119 kubelet[2178]: I0513 05:41:22.272901 2178 state_mem.go:36] "Initialized new in-memory state store"
May 13 05:41:22.277166 kubelet[2178]: I0513 05:41:22.277034 2178 policy_none.go:49] "None policy: Start"
May 13 05:41:22.277569 kubelet[2178]: I0513 05:41:22.277540 2178 memory_manager.go:170] "Starting memorymanager" policy="None"
May 13 05:41:22.277651 kubelet[2178]: I0513 05:41:22.277600 2178 state_mem.go:35] "Initializing new in-memory state store"
May 13 05:41:22.284157 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
May 13 05:41:22.293326 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
May 13 05:41:22.297314 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
May 13 05:41:22.307487 kubelet[2178]: E0513 05:41:22.307438 2178 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-3-n-f146884e63.novalocal\" not found"
May 13 05:41:22.308309 kubelet[2178]: I0513 05:41:22.307834 2178 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
May 13 05:41:22.308309 kubelet[2178]: I0513 05:41:22.307985 2178 eviction_manager.go:189] "Eviction manager: starting control loop"
May 13 05:41:22.308309 kubelet[2178]: I0513 05:41:22.307996 2178 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
May 13 05:41:22.308309 kubelet[2178]: I0513 05:41:22.308185 2178 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
May 13 05:41:22.310492 kubelet[2178]: E0513 05:41:22.310192 2178 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-3-n-f146884e63.novalocal\" not found"
May 13 05:41:22.346692 systemd[1]: Created slice kubepods-burstable-pod931e72e2565aa16a0d86822d70e3887d.slice - libcontainer container kubepods-burstable-pod931e72e2565aa16a0d86822d70e3887d.slice.
May 13 05:41:22.383805 systemd[1]: Created slice kubepods-burstable-podaf79603f8f042887aaf1f05341a454c3.slice - libcontainer container kubepods-burstable-podaf79603f8f042887aaf1f05341a454c3.slice.
May 13 05:41:22.400718 systemd[1]: Created slice kubepods-burstable-podf52f77821d58a86bc8d62989f7aecfc0.slice - libcontainer container kubepods-burstable-podf52f77821d58a86bc8d62989f7aecfc0.slice.
May 13 05:41:22.408838 kubelet[2178]: I0513 05:41:22.408575 2178 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/931e72e2565aa16a0d86822d70e3887d-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-3-n-f146884e63.novalocal\" (UID: \"931e72e2565aa16a0d86822d70e3887d\") " pod="kube-system/kube-apiserver-ci-4081-3-3-n-f146884e63.novalocal"
May 13 05:41:22.408838 kubelet[2178]: I0513 05:41:22.408607 2178 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/af79603f8f042887aaf1f05341a454c3-ca-certs\") pod \"kube-controller-manager-ci-4081-3-3-n-f146884e63.novalocal\" (UID: \"af79603f8f042887aaf1f05341a454c3\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-n-f146884e63.novalocal"
May 13 05:41:22.408838 kubelet[2178]: I0513 05:41:22.408635 2178 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/af79603f8f042887aaf1f05341a454c3-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-3-n-f146884e63.novalocal\" (UID: \"af79603f8f042887aaf1f05341a454c3\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-n-f146884e63.novalocal"
May 13 05:41:22.408838 kubelet[2178]: I0513 05:41:22.408652 2178 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/af79603f8f042887aaf1f05341a454c3-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-3-n-f146884e63.novalocal\" (UID: \"af79603f8f042887aaf1f05341a454c3\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-n-f146884e63.novalocal"
May 13 05:41:22.409023 kubelet[2178]: I0513 05:41:22.408670 2178 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/af79603f8f042887aaf1f05341a454c3-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-3-n-f146884e63.novalocal\" (UID: \"af79603f8f042887aaf1f05341a454c3\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-n-f146884e63.novalocal"
May 13 05:41:22.409023 kubelet[2178]: I0513 05:41:22.408689 2178 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/931e72e2565aa16a0d86822d70e3887d-ca-certs\") pod \"kube-apiserver-ci-4081-3-3-n-f146884e63.novalocal\" (UID: \"931e72e2565aa16a0d86822d70e3887d\") " pod="kube-system/kube-apiserver-ci-4081-3-3-n-f146884e63.novalocal"
May 13 05:41:22.409023 kubelet[2178]: I0513 05:41:22.408705 2178 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/931e72e2565aa16a0d86822d70e3887d-k8s-certs\") pod \"kube-apiserver-ci-4081-3-3-n-f146884e63.novalocal\" (UID: \"931e72e2565aa16a0d86822d70e3887d\") " pod="kube-system/kube-apiserver-ci-4081-3-3-n-f146884e63.novalocal"
May 13 05:41:22.409023 kubelet[2178]: I0513 05:41:22.408721 2178 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/af79603f8f042887aaf1f05341a454c3-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-3-n-f146884e63.novalocal\" (UID: \"af79603f8f042887aaf1f05341a454c3\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-n-f146884e63.novalocal"
May 13 05:41:22.409023 kubelet[2178]: I0513 05:41:22.408739 2178 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f52f77821d58a86bc8d62989f7aecfc0-kubeconfig\") pod \"kube-scheduler-ci-4081-3-3-n-f146884e63.novalocal\" (UID: \"f52f77821d58a86bc8d62989f7aecfc0\") " pod="kube-system/kube-scheduler-ci-4081-3-3-n-f146884e63.novalocal"
May 13 05:41:22.410314 kubelet[2178]: E0513 05:41:22.410268 2178 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.224:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-3-n-f146884e63.novalocal?timeout=10s\": dial tcp 172.24.4.224:6443: connect: connection refused" interval="400ms"
May 13 05:41:22.410915 kubelet[2178]: I0513 05:41:22.410889 2178 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-3-n-f146884e63.novalocal"
May 13 05:41:22.411333 kubelet[2178]: E0513 05:41:22.411297 2178 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.24.4.224:6443/api/v1/nodes\": dial tcp 172.24.4.224:6443: connect: connection refused" node="ci-4081-3-3-n-f146884e63.novalocal"
May 13 05:41:22.615370 kubelet[2178]: I0513 05:41:22.615162 2178 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-3-n-f146884e63.novalocal"
May 13 05:41:22.616272 kubelet[2178]: E0513 05:41:22.615907 2178 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.24.4.224:6443/api/v1/nodes\": dial tcp 172.24.4.224:6443: connect: connection refused" node="ci-4081-3-3-n-f146884e63.novalocal"
May 13 05:41:22.682654 containerd[1460]: time="2025-05-13T05:41:22.682512377Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-3-n-f146884e63.novalocal,Uid:931e72e2565aa16a0d86822d70e3887d,Namespace:kube-system,Attempt:0,}"
May 13 05:41:22.707022 containerd[1460]: time="2025-05-13T05:41:22.706191216Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-3-n-f146884e63.novalocal,Uid:f52f77821d58a86bc8d62989f7aecfc0,Namespace:kube-system,Attempt:0,}"
May 13 05:41:22.709406 containerd[1460]: time="2025-05-13T05:41:22.709293844Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-3-n-f146884e63.novalocal,Uid:af79603f8f042887aaf1f05341a454c3,Namespace:kube-system,Attempt:0,}"
May 13 05:41:22.811380 kubelet[2178]: E0513 05:41:22.811316 2178 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.224:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-3-n-f146884e63.novalocal?timeout=10s\": dial tcp 172.24.4.224:6443: connect: connection refused" interval="800ms"
May 13 05:41:23.018953 kubelet[2178]: I0513 05:41:23.018615 2178 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-3-n-f146884e63.novalocal"
May 13 05:41:23.019178 kubelet[2178]: E0513 05:41:23.019105 2178 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.24.4.224:6443/api/v1/nodes\": dial tcp 172.24.4.224:6443: connect: connection refused" node="ci-4081-3-3-n-f146884e63.novalocal"
May 13 05:41:23.142865 kubelet[2178]: W0513 05:41:23.142809 2178 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.24.4.224:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.224:6443: connect: connection refused
May 13 05:41:23.143311 kubelet[2178]: E0513 05:41:23.142894 2178 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.24.4.224:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.24.4.224:6443: connect: connection refused" logger="UnhandledError"
May 13 05:41:23.288480 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount306851463.mount: Deactivated successfully.
May 13 05:41:23.298368 containerd[1460]: time="2025-05-13T05:41:23.297753043Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 13 05:41:23.300038 containerd[1460]: time="2025-05-13T05:41:23.299875769Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 13 05:41:23.301553 containerd[1460]: time="2025-05-13T05:41:23.301496207Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
May 13 05:41:23.303266 containerd[1460]: time="2025-05-13T05:41:23.303101567Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 13 05:41:23.305394 containerd[1460]: time="2025-05-13T05:41:23.305342246Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 13 05:41:23.307338 containerd[1460]: time="2025-05-13T05:41:23.307153695Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
May 13 05:41:23.308549 containerd[1460]: time="2025-05-13T05:41:23.308492291Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064"
May 13 05:41:23.311527 containerd[1460]: time="2025-05-13T05:41:23.311414987Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 13 05:41:23.313578 containerd[1460]: time="2025-05-13T05:41:23.313010758Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 606.5536ms"
May 13 05:41:23.319118 containerd[1460]: time="2025-05-13T05:41:23.319031101Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 609.49585ms"
May 13 05:41:23.319985 containerd[1460]: time="2025-05-13T05:41:23.319952901Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 637.303575ms"
May 13 05:41:23.488533 kubelet[2178]: W0513 05:41:23.488415 2178 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.24.4.224:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-3-n-f146884e63.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.224:6443: connect: connection refused
May 13 05:41:23.488662 kubelet[2178]: E0513 05:41:23.488549 2178 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.24.4.224:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-3-n-f146884e63.novalocal&limit=500&resourceVersion=0\": dial tcp 172.24.4.224:6443: connect: connection refused" logger="UnhandledError"
May 13 05:41:23.501830 kubelet[2178]: W0513 05:41:23.501732 2178 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.24.4.224:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.24.4.224:6443: connect: connection refused
May 13 05:41:23.501909 kubelet[2178]: E0513 05:41:23.501840 2178 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.24.4.224:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.24.4.224:6443: connect: connection refused" logger="UnhandledError"
May 13 05:41:23.539505 containerd[1460]: time="2025-05-13T05:41:23.539301815Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 13 05:41:23.539771 containerd[1460]: time="2025-05-13T05:41:23.539630165Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 13 05:41:23.539771 containerd[1460]: time="2025-05-13T05:41:23.539700327Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 13 05:41:23.540077 containerd[1460]: time="2025-05-13T05:41:23.540031012Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 13 05:41:23.559567 kubelet[2178]: W0513 05:41:23.559443 2178 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.24.4.224:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.224:6443: connect: connection refused
May 13 05:41:23.559567 kubelet[2178]: E0513 05:41:23.559537 2178 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.24.4.224:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.24.4.224:6443: connect: connection refused" logger="UnhandledError"
May 13 05:41:23.560072 containerd[1460]: time="2025-05-13T05:41:23.558384170Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 13 05:41:23.560072 containerd[1460]: time="2025-05-13T05:41:23.558518403Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 13 05:41:23.560072 containerd[1460]: time="2025-05-13T05:41:23.558580331Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 13 05:41:23.560072 containerd[1460]: time="2025-05-13T05:41:23.558775418Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 13 05:41:23.564902 containerd[1460]: time="2025-05-13T05:41:23.563585907Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 13 05:41:23.564902 containerd[1460]: time="2025-05-13T05:41:23.563724790Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 13 05:41:23.564902 containerd[1460]: time="2025-05-13T05:41:23.563772780Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 13 05:41:23.564902 containerd[1460]: time="2025-05-13T05:41:23.563949563Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 13 05:41:23.597368 systemd[1]: Started cri-containerd-18dd0c5e1660667612f87fcdf187d90b0242cd743fed228d661f6a429783d872.scope - libcontainer container 18dd0c5e1660667612f87fcdf187d90b0242cd743fed228d661f6a429783d872.
May 13 05:41:23.605911 systemd[1]: Started cri-containerd-4b4fb8a6e786d3d887c0a2410150c5fd1b4222bbb61a466c8ec68f1f77989c59.scope - libcontainer container 4b4fb8a6e786d3d887c0a2410150c5fd1b4222bbb61a466c8ec68f1f77989c59.
May 13 05:41:23.609426 systemd[1]: Started cri-containerd-4debf97b067f4f73b0ab50a9d53113a491c9c75e7618c525fe44c12a97150eb9.scope - libcontainer container 4debf97b067f4f73b0ab50a9d53113a491c9c75e7618c525fe44c12a97150eb9.
May 13 05:41:23.611783 kubelet[2178]: E0513 05:41:23.611747 2178 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.224:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-3-n-f146884e63.novalocal?timeout=10s\": dial tcp 172.24.4.224:6443: connect: connection refused" interval="1.6s" May 13 05:41:23.681129 containerd[1460]: time="2025-05-13T05:41:23.680820607Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-3-n-f146884e63.novalocal,Uid:931e72e2565aa16a0d86822d70e3887d,Namespace:kube-system,Attempt:0,} returns sandbox id \"18dd0c5e1660667612f87fcdf187d90b0242cd743fed228d661f6a429783d872\"" May 13 05:41:23.684097 containerd[1460]: time="2025-05-13T05:41:23.683963450Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-3-n-f146884e63.novalocal,Uid:af79603f8f042887aaf1f05341a454c3,Namespace:kube-system,Attempt:0,} returns sandbox id \"4b4fb8a6e786d3d887c0a2410150c5fd1b4222bbb61a466c8ec68f1f77989c59\"" May 13 05:41:23.685976 containerd[1460]: time="2025-05-13T05:41:23.685947704Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-3-n-f146884e63.novalocal,Uid:f52f77821d58a86bc8d62989f7aecfc0,Namespace:kube-system,Attempt:0,} returns sandbox id \"4debf97b067f4f73b0ab50a9d53113a491c9c75e7618c525fe44c12a97150eb9\"" May 13 05:41:23.687855 containerd[1460]: time="2025-05-13T05:41:23.687730389Z" level=info msg="CreateContainer within sandbox \"18dd0c5e1660667612f87fcdf187d90b0242cd743fed228d661f6a429783d872\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 13 05:41:23.689046 containerd[1460]: time="2025-05-13T05:41:23.689017719Z" level=info msg="CreateContainer within sandbox \"4b4fb8a6e786d3d887c0a2410150c5fd1b4222bbb61a466c8ec68f1f77989c59\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 13 05:41:23.690253 containerd[1460]: 
time="2025-05-13T05:41:23.690220338Z" level=info msg="CreateContainer within sandbox \"4debf97b067f4f73b0ab50a9d53113a491c9c75e7618c525fe44c12a97150eb9\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 13 05:41:23.738172 containerd[1460]: time="2025-05-13T05:41:23.738086819Z" level=info msg="CreateContainer within sandbox \"4b4fb8a6e786d3d887c0a2410150c5fd1b4222bbb61a466c8ec68f1f77989c59\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"1084673bb4bcf2dafc04647b954e57b591ca42f4c0df19f50e1eea3ccb157661\"" May 13 05:41:23.739479 containerd[1460]: time="2025-05-13T05:41:23.739268479Z" level=info msg="CreateContainer within sandbox \"4debf97b067f4f73b0ab50a9d53113a491c9c75e7618c525fe44c12a97150eb9\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"0bc8af51b771fd64b7cede1787c392344e6572fff9b0b9189cb4135650023076\"" May 13 05:41:23.739479 containerd[1460]: time="2025-05-13T05:41:23.739413562Z" level=info msg="StartContainer for \"1084673bb4bcf2dafc04647b954e57b591ca42f4c0df19f50e1eea3ccb157661\"" May 13 05:41:23.741279 containerd[1460]: time="2025-05-13T05:41:23.739797668Z" level=info msg="CreateContainer within sandbox \"18dd0c5e1660667612f87fcdf187d90b0242cd743fed228d661f6a429783d872\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"3a8f148d8499c38df931c6ec413ffa58e07d9e02b1dbd66ed5660820e71a65d5\"" May 13 05:41:23.741279 containerd[1460]: time="2025-05-13T05:41:23.740064110Z" level=info msg="StartContainer for \"3a8f148d8499c38df931c6ec413ffa58e07d9e02b1dbd66ed5660820e71a65d5\"" May 13 05:41:23.743757 containerd[1460]: time="2025-05-13T05:41:23.743721312Z" level=info msg="StartContainer for \"0bc8af51b771fd64b7cede1787c392344e6572fff9b0b9189cb4135650023076\"" May 13 05:41:23.779380 systemd[1]: Started cri-containerd-1084673bb4bcf2dafc04647b954e57b591ca42f4c0df19f50e1eea3ccb157661.scope - libcontainer container 
1084673bb4bcf2dafc04647b954e57b591ca42f4c0df19f50e1eea3ccb157661. May 13 05:41:23.783300 systemd[1]: Started cri-containerd-3a8f148d8499c38df931c6ec413ffa58e07d9e02b1dbd66ed5660820e71a65d5.scope - libcontainer container 3a8f148d8499c38df931c6ec413ffa58e07d9e02b1dbd66ed5660820e71a65d5. May 13 05:41:23.797412 systemd[1]: Started cri-containerd-0bc8af51b771fd64b7cede1787c392344e6572fff9b0b9189cb4135650023076.scope - libcontainer container 0bc8af51b771fd64b7cede1787c392344e6572fff9b0b9189cb4135650023076. May 13 05:41:23.823463 kubelet[2178]: I0513 05:41:23.823382 2178 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-3-n-f146884e63.novalocal" May 13 05:41:23.823733 kubelet[2178]: E0513 05:41:23.823704 2178 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.24.4.224:6443/api/v1/nodes\": dial tcp 172.24.4.224:6443: connect: connection refused" node="ci-4081-3-3-n-f146884e63.novalocal" May 13 05:41:23.855956 containerd[1460]: time="2025-05-13T05:41:23.855817236Z" level=info msg="StartContainer for \"1084673bb4bcf2dafc04647b954e57b591ca42f4c0df19f50e1eea3ccb157661\" returns successfully" May 13 05:41:23.865936 containerd[1460]: time="2025-05-13T05:41:23.865410492Z" level=info msg="StartContainer for \"3a8f148d8499c38df931c6ec413ffa58e07d9e02b1dbd66ed5660820e71a65d5\" returns successfully" May 13 05:41:23.886019 containerd[1460]: time="2025-05-13T05:41:23.885966287Z" level=info msg="StartContainer for \"0bc8af51b771fd64b7cede1787c392344e6572fff9b0b9189cb4135650023076\" returns successfully" May 13 05:41:25.425847 kubelet[2178]: I0513 05:41:25.425809 2178 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-3-n-f146884e63.novalocal" May 13 05:41:26.257749 kubelet[2178]: E0513 05:41:26.257701 2178 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081-3-3-n-f146884e63.novalocal\" not found" node="ci-4081-3-3-n-f146884e63.novalocal" May 13 
05:41:26.323379 kubelet[2178]: I0513 05:41:26.323336 2178 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081-3-3-n-f146884e63.novalocal" May 13 05:41:26.323379 kubelet[2178]: E0513 05:41:26.323380 2178 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ci-4081-3-3-n-f146884e63.novalocal\": node \"ci-4081-3-3-n-f146884e63.novalocal\" not found" May 13 05:41:27.171706 kubelet[2178]: I0513 05:41:27.171633 2178 apiserver.go:52] "Watching apiserver" May 13 05:41:27.207682 kubelet[2178]: I0513 05:41:27.207545 2178 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" May 13 05:41:29.504497 systemd[1]: Reloading requested from client PID 2465 ('systemctl') (unit session-9.scope)... May 13 05:41:29.504530 systemd[1]: Reloading... May 13 05:41:29.640230 zram_generator::config[2503]: No configuration found. May 13 05:41:29.802229 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 05:41:29.920566 systemd[1]: Reloading finished in 415 ms. May 13 05:41:29.973238 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 13 05:41:29.992721 systemd[1]: kubelet.service: Deactivated successfully. May 13 05:41:29.993788 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 13 05:41:29.993963 systemd[1]: kubelet.service: Consumed 1.221s CPU time, 119.3M memory peak, 0B memory swap peak. May 13 05:41:30.002990 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 05:41:30.362630 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 13 05:41:30.366436 (kubelet)[2568]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 13 05:41:30.446538 kubelet[2568]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 13 05:41:30.446538 kubelet[2568]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 13 05:41:30.446538 kubelet[2568]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 13 05:41:30.446862 kubelet[2568]: I0513 05:41:30.446792 2568 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 13 05:41:30.458521 kubelet[2568]: I0513 05:41:30.458459 2568 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" May 13 05:41:30.458521 kubelet[2568]: I0513 05:41:30.458509 2568 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 13 05:41:30.459114 kubelet[2568]: I0513 05:41:30.459071 2568 server.go:929] "Client rotation is on, will bootstrap in background" May 13 05:41:30.462537 kubelet[2568]: I0513 05:41:30.462494 2568 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
May 13 05:41:30.467623 kubelet[2568]: I0513 05:41:30.467485 2568 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 13 05:41:30.471740 kubelet[2568]: E0513 05:41:30.471684 2568 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 13 05:41:30.472072 kubelet[2568]: I0513 05:41:30.471876 2568 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 13 05:41:30.474409 kubelet[2568]: I0513 05:41:30.474391 2568 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 13 05:41:30.474630 kubelet[2568]: I0513 05:41:30.474593 2568 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 13 05:41:30.475225 kubelet[2568]: I0513 05:41:30.474770 2568 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 13 05:41:30.475225 kubelet[2568]: I0513 05:41:30.474802 2568 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"ci-4081-3-3-n-f146884e63.novalocal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 13 05:41:30.475225 kubelet[2568]: I0513 05:41:30.475085 2568 topology_manager.go:138] "Creating topology manager with none policy" May 13 05:41:30.475225 kubelet[2568]: I0513 05:41:30.475095 2568 container_manager_linux.go:300] "Creating device plugin manager" May 13 05:41:30.475407 kubelet[2568]: I0513 05:41:30.475124 2568 state_mem.go:36] "Initialized new in-memory state store" May 13 05:41:30.475476 kubelet[2568]: I0513 05:41:30.475465 2568 
kubelet.go:408] "Attempting to sync node with API server" May 13 05:41:30.475535 kubelet[2568]: I0513 05:41:30.475526 2568 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 13 05:41:30.475605 kubelet[2568]: I0513 05:41:30.475597 2568 kubelet.go:314] "Adding apiserver pod source" May 13 05:41:30.475667 kubelet[2568]: I0513 05:41:30.475658 2568 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 13 05:41:30.478604 kubelet[2568]: I0513 05:41:30.478577 2568 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" May 13 05:41:30.479373 kubelet[2568]: I0513 05:41:30.479068 2568 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 13 05:41:30.480606 kubelet[2568]: I0513 05:41:30.480041 2568 server.go:1269] "Started kubelet" May 13 05:41:30.486216 kubelet[2568]: I0513 05:41:30.484257 2568 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 13 05:41:30.486216 kubelet[2568]: I0513 05:41:30.485139 2568 server.go:460] "Adding debug handlers to kubelet server" May 13 05:41:30.486294 kubelet[2568]: I0513 05:41:30.486220 2568 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 13 05:41:30.486484 kubelet[2568]: I0513 05:41:30.486462 2568 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 13 05:41:30.492247 kubelet[2568]: I0513 05:41:30.490947 2568 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 13 05:41:30.497211 kubelet[2568]: I0513 05:41:30.496105 2568 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 13 05:41:30.497521 kubelet[2568]: I0513 05:41:30.497494 2568 volume_manager.go:289] "Starting Kubelet Volume Manager" May 13 05:41:30.497821 kubelet[2568]: 
E0513 05:41:30.497679 2568 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-3-n-f146884e63.novalocal\" not found" May 13 05:41:30.499118 kubelet[2568]: I0513 05:41:30.499099 2568 desired_state_of_world_populator.go:146] "Desired state populator starts to run" May 13 05:41:30.499251 kubelet[2568]: I0513 05:41:30.499236 2568 reconciler.go:26] "Reconciler: start to sync state" May 13 05:41:30.503722 kubelet[2568]: I0513 05:41:30.503485 2568 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 13 05:41:30.505502 kubelet[2568]: I0513 05:41:30.505480 2568 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 13 05:41:30.505584 kubelet[2568]: I0513 05:41:30.505562 2568 status_manager.go:217] "Starting to sync pod status with apiserver" May 13 05:41:30.505643 kubelet[2568]: I0513 05:41:30.505591 2568 kubelet.go:2321] "Starting kubelet main sync loop" May 13 05:41:30.505674 kubelet[2568]: E0513 05:41:30.505632 2568 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 13 05:41:30.507984 kubelet[2568]: I0513 05:41:30.507967 2568 factory.go:221] Registration of the systemd container factory successfully May 13 05:41:30.508147 kubelet[2568]: I0513 05:41:30.508128 2568 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 13 05:41:30.509695 kubelet[2568]: I0513 05:41:30.509680 2568 factory.go:221] Registration of the containerd container factory successfully May 13 05:41:30.532084 kubelet[2568]: E0513 05:41:30.532054 2568 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 13 05:41:30.566977 kubelet[2568]: I0513 05:41:30.566948 2568 cpu_manager.go:214] "Starting CPU manager" policy="none" May 13 05:41:30.566977 kubelet[2568]: I0513 05:41:30.566966 2568 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 13 05:41:30.566977 kubelet[2568]: I0513 05:41:30.566983 2568 state_mem.go:36] "Initialized new in-memory state store" May 13 05:41:30.567759 kubelet[2568]: I0513 05:41:30.567120 2568 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 13 05:41:30.567759 kubelet[2568]: I0513 05:41:30.567142 2568 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 13 05:41:30.567759 kubelet[2568]: I0513 05:41:30.567169 2568 policy_none.go:49] "None policy: Start" May 13 05:41:30.568734 kubelet[2568]: I0513 05:41:30.568718 2568 memory_manager.go:170] "Starting memorymanager" policy="None" May 13 05:41:30.568795 kubelet[2568]: I0513 05:41:30.568738 2568 state_mem.go:35] "Initializing new in-memory state store" May 13 05:41:30.569027 kubelet[2568]: I0513 05:41:30.568986 2568 state_mem.go:75] "Updated machine memory state" May 13 05:41:30.572919 kubelet[2568]: I0513 05:41:30.572898 2568 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 13 05:41:30.573390 kubelet[2568]: I0513 05:41:30.573034 2568 eviction_manager.go:189] "Eviction manager: starting control loop" May 13 05:41:30.573390 kubelet[2568]: I0513 05:41:30.573045 2568 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 13 05:41:30.573390 kubelet[2568]: I0513 05:41:30.573269 2568 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 13 05:41:30.679387 kubelet[2568]: I0513 05:41:30.679135 2568 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-3-n-f146884e63.novalocal" May 13 05:41:30.800373 kubelet[2568]: I0513 
05:41:30.800250 2568 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/931e72e2565aa16a0d86822d70e3887d-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-3-n-f146884e63.novalocal\" (UID: \"931e72e2565aa16a0d86822d70e3887d\") " pod="kube-system/kube-apiserver-ci-4081-3-3-n-f146884e63.novalocal" May 13 05:41:30.800373 kubelet[2568]: I0513 05:41:30.800360 2568 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/af79603f8f042887aaf1f05341a454c3-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-3-n-f146884e63.novalocal\" (UID: \"af79603f8f042887aaf1f05341a454c3\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-n-f146884e63.novalocal" May 13 05:41:30.801076 kubelet[2568]: I0513 05:41:30.800419 2568 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/af79603f8f042887aaf1f05341a454c3-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-3-n-f146884e63.novalocal\" (UID: \"af79603f8f042887aaf1f05341a454c3\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-n-f146884e63.novalocal" May 13 05:41:30.801076 kubelet[2568]: I0513 05:41:30.800469 2568 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/af79603f8f042887aaf1f05341a454c3-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-3-n-f146884e63.novalocal\" (UID: \"af79603f8f042887aaf1f05341a454c3\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-n-f146884e63.novalocal" May 13 05:41:30.801076 kubelet[2568]: I0513 05:41:30.800549 2568 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/931e72e2565aa16a0d86822d70e3887d-ca-certs\") pod \"kube-apiserver-ci-4081-3-3-n-f146884e63.novalocal\" (UID: \"931e72e2565aa16a0d86822d70e3887d\") " pod="kube-system/kube-apiserver-ci-4081-3-3-n-f146884e63.novalocal" May 13 05:41:30.801076 kubelet[2568]: I0513 05:41:30.800593 2568 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/931e72e2565aa16a0d86822d70e3887d-k8s-certs\") pod \"kube-apiserver-ci-4081-3-3-n-f146884e63.novalocal\" (UID: \"931e72e2565aa16a0d86822d70e3887d\") " pod="kube-system/kube-apiserver-ci-4081-3-3-n-f146884e63.novalocal" May 13 05:41:30.801076 kubelet[2568]: I0513 05:41:30.800637 2568 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/af79603f8f042887aaf1f05341a454c3-ca-certs\") pod \"kube-controller-manager-ci-4081-3-3-n-f146884e63.novalocal\" (UID: \"af79603f8f042887aaf1f05341a454c3\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-n-f146884e63.novalocal" May 13 05:41:30.801510 kubelet[2568]: I0513 05:41:30.800686 2568 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/af79603f8f042887aaf1f05341a454c3-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-3-n-f146884e63.novalocal\" (UID: \"af79603f8f042887aaf1f05341a454c3\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-n-f146884e63.novalocal" May 13 05:41:30.801510 kubelet[2568]: I0513 05:41:30.800733 2568 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f52f77821d58a86bc8d62989f7aecfc0-kubeconfig\") pod \"kube-scheduler-ci-4081-3-3-n-f146884e63.novalocal\" (UID: \"f52f77821d58a86bc8d62989f7aecfc0\") " 
pod="kube-system/kube-scheduler-ci-4081-3-3-n-f146884e63.novalocal" May 13 05:41:30.936195 kubelet[2568]: W0513 05:41:30.935994 2568 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 13 05:41:30.937597 kubelet[2568]: W0513 05:41:30.936851 2568 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 13 05:41:30.939134 kubelet[2568]: W0513 05:41:30.939100 2568 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 13 05:41:30.952154 kubelet[2568]: I0513 05:41:30.952075 2568 kubelet_node_status.go:111] "Node was previously registered" node="ci-4081-3-3-n-f146884e63.novalocal" May 13 05:41:30.952653 kubelet[2568]: I0513 05:41:30.952256 2568 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081-3-3-n-f146884e63.novalocal" May 13 05:41:31.099062 sudo[2600]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin May 13 05:41:31.099734 sudo[2600]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) May 13 05:41:31.478709 kubelet[2568]: I0513 05:41:31.478664 2568 apiserver.go:52] "Watching apiserver" May 13 05:41:31.500135 kubelet[2568]: I0513 05:41:31.500081 2568 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" May 13 05:41:31.567125 kubelet[2568]: I0513 05:41:31.567052 2568 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-3-3-n-f146884e63.novalocal" podStartSLOduration=1.5670296270000001 podStartE2EDuration="1.567029627s" podCreationTimestamp="2025-05-13 05:41:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2025-05-13 05:41:31.549354532 +0000 UTC m=+1.176630025" watchObservedRunningTime="2025-05-13 05:41:31.567029627 +0000 UTC m=+1.194305120" May 13 05:41:31.580505 kubelet[2568]: I0513 05:41:31.580454 2568 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-3-3-n-f146884e63.novalocal" podStartSLOduration=1.580435033 podStartE2EDuration="1.580435033s" podCreationTimestamp="2025-05-13 05:41:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 05:41:31.567372072 +0000 UTC m=+1.194647565" watchObservedRunningTime="2025-05-13 05:41:31.580435033 +0000 UTC m=+1.207710526" May 13 05:41:31.596891 kubelet[2568]: I0513 05:41:31.596831 2568 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-3-3-n-f146884e63.novalocal" podStartSLOduration=1.596814829 podStartE2EDuration="1.596814829s" podCreationTimestamp="2025-05-13 05:41:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 05:41:31.582522363 +0000 UTC m=+1.209797866" watchObservedRunningTime="2025-05-13 05:41:31.596814829 +0000 UTC m=+1.224090322" May 13 05:41:31.729319 sudo[2600]: pam_unix(sudo:session): session closed for user root May 13 05:41:34.149924 sudo[1691]: pam_unix(sudo:session): session closed for user root May 13 05:41:34.378675 sshd[1688]: pam_unix(sshd:session): session closed for user core May 13 05:41:34.385870 systemd[1]: sshd@6-172.24.4.224:22-172.24.4.1:43376.service: Deactivated successfully. May 13 05:41:34.389113 systemd[1]: session-9.scope: Deactivated successfully. May 13 05:41:34.389439 systemd[1]: session-9.scope: Consumed 7.859s CPU time, 157.3M memory peak, 0B memory swap peak. May 13 05:41:34.391621 systemd-logind[1441]: Session 9 logged out. 
Waiting for processes to exit. May 13 05:41:34.394684 systemd-logind[1441]: Removed session 9. May 13 05:41:34.887272 kubelet[2568]: I0513 05:41:34.887126 2568 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 13 05:41:34.888141 kubelet[2568]: I0513 05:41:34.888089 2568 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 13 05:41:34.888313 containerd[1460]: time="2025-05-13T05:41:34.887746473Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 13 05:41:35.587500 systemd[1]: Created slice kubepods-besteffort-podffeee484_57b8_48bc_a816_bca75a48309f.slice - libcontainer container kubepods-besteffort-podffeee484_57b8_48bc_a816_bca75a48309f.slice. May 13 05:41:35.606988 systemd[1]: Created slice kubepods-burstable-pod83b69d26_ddc3_4213_a445_84588c734b1c.slice - libcontainer container kubepods-burstable-pod83b69d26_ddc3_4213_a445_84588c734b1c.slice. 
May 13 05:41:35.735024 kubelet[2568]: I0513 05:41:35.734251 2568 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ffeee484-57b8-48bc-a816-bca75a48309f-xtables-lock\") pod \"kube-proxy-gmk6q\" (UID: \"ffeee484-57b8-48bc-a816-bca75a48309f\") " pod="kube-system/kube-proxy-gmk6q" May 13 05:41:35.735024 kubelet[2568]: I0513 05:41:35.734312 2568 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/83b69d26-ddc3-4213-a445-84588c734b1c-lib-modules\") pod \"cilium-f2k9t\" (UID: \"83b69d26-ddc3-4213-a445-84588c734b1c\") " pod="kube-system/cilium-f2k9t" May 13 05:41:35.735024 kubelet[2568]: I0513 05:41:35.734343 2568 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/83b69d26-ddc3-4213-a445-84588c734b1c-clustermesh-secrets\") pod \"cilium-f2k9t\" (UID: \"83b69d26-ddc3-4213-a445-84588c734b1c\") " pod="kube-system/cilium-f2k9t" May 13 05:41:35.735024 kubelet[2568]: I0513 05:41:35.734375 2568 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ffeee484-57b8-48bc-a816-bca75a48309f-kube-proxy\") pod \"kube-proxy-gmk6q\" (UID: \"ffeee484-57b8-48bc-a816-bca75a48309f\") " pod="kube-system/kube-proxy-gmk6q" May 13 05:41:35.735024 kubelet[2568]: I0513 05:41:35.734404 2568 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/83b69d26-ddc3-4213-a445-84588c734b1c-bpf-maps\") pod \"cilium-f2k9t\" (UID: \"83b69d26-ddc3-4213-a445-84588c734b1c\") " pod="kube-system/cilium-f2k9t" May 13 05:41:35.735024 kubelet[2568]: I0513 05:41:35.734432 2568 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w87x9\" (UniqueName: \"kubernetes.io/projected/ffeee484-57b8-48bc-a816-bca75a48309f-kube-api-access-w87x9\") pod \"kube-proxy-gmk6q\" (UID: \"ffeee484-57b8-48bc-a816-bca75a48309f\") " pod="kube-system/kube-proxy-gmk6q" May 13 05:41:35.735365 kubelet[2568]: I0513 05:41:35.734458 2568 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/83b69d26-ddc3-4213-a445-84588c734b1c-cni-path\") pod \"cilium-f2k9t\" (UID: \"83b69d26-ddc3-4213-a445-84588c734b1c\") " pod="kube-system/cilium-f2k9t" May 13 05:41:35.735365 kubelet[2568]: I0513 05:41:35.734524 2568 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/83b69d26-ddc3-4213-a445-84588c734b1c-host-proc-sys-net\") pod \"cilium-f2k9t\" (UID: \"83b69d26-ddc3-4213-a445-84588c734b1c\") " pod="kube-system/cilium-f2k9t" May 13 05:41:35.735365 kubelet[2568]: I0513 05:41:35.734552 2568 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/83b69d26-ddc3-4213-a445-84588c734b1c-xtables-lock\") pod \"cilium-f2k9t\" (UID: \"83b69d26-ddc3-4213-a445-84588c734b1c\") " pod="kube-system/cilium-f2k9t" May 13 05:41:35.735365 kubelet[2568]: I0513 05:41:35.734580 2568 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/83b69d26-ddc3-4213-a445-84588c734b1c-etc-cni-netd\") pod \"cilium-f2k9t\" (UID: \"83b69d26-ddc3-4213-a445-84588c734b1c\") " pod="kube-system/cilium-f2k9t" May 13 05:41:35.735365 kubelet[2568]: I0513 05:41:35.734610 2568 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/83b69d26-ddc3-4213-a445-84588c734b1c-cilium-config-path\") pod \"cilium-f2k9t\" (UID: \"83b69d26-ddc3-4213-a445-84588c734b1c\") " pod="kube-system/cilium-f2k9t" May 13 05:41:35.735365 kubelet[2568]: I0513 05:41:35.734637 2568 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-twg8t\" (UniqueName: \"kubernetes.io/projected/83b69d26-ddc3-4213-a445-84588c734b1c-kube-api-access-twg8t\") pod \"cilium-f2k9t\" (UID: \"83b69d26-ddc3-4213-a445-84588c734b1c\") " pod="kube-system/cilium-f2k9t" May 13 05:41:35.735557 kubelet[2568]: I0513 05:41:35.734665 2568 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ffeee484-57b8-48bc-a816-bca75a48309f-lib-modules\") pod \"kube-proxy-gmk6q\" (UID: \"ffeee484-57b8-48bc-a816-bca75a48309f\") " pod="kube-system/kube-proxy-gmk6q" May 13 05:41:35.735557 kubelet[2568]: I0513 05:41:35.734690 2568 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/83b69d26-ddc3-4213-a445-84588c734b1c-host-proc-sys-kernel\") pod \"cilium-f2k9t\" (UID: \"83b69d26-ddc3-4213-a445-84588c734b1c\") " pod="kube-system/cilium-f2k9t" May 13 05:41:35.735557 kubelet[2568]: I0513 05:41:35.734720 2568 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/83b69d26-ddc3-4213-a445-84588c734b1c-hostproc\") pod \"cilium-f2k9t\" (UID: \"83b69d26-ddc3-4213-a445-84588c734b1c\") " pod="kube-system/cilium-f2k9t" May 13 05:41:35.735557 kubelet[2568]: I0513 05:41:35.734749 2568 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/83b69d26-ddc3-4213-a445-84588c734b1c-hubble-tls\") pod \"cilium-f2k9t\" (UID: 
\"83b69d26-ddc3-4213-a445-84588c734b1c\") " pod="kube-system/cilium-f2k9t" May 13 05:41:35.735557 kubelet[2568]: I0513 05:41:35.734774 2568 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/83b69d26-ddc3-4213-a445-84588c734b1c-cilium-run\") pod \"cilium-f2k9t\" (UID: \"83b69d26-ddc3-4213-a445-84588c734b1c\") " pod="kube-system/cilium-f2k9t" May 13 05:41:35.735557 kubelet[2568]: I0513 05:41:35.734800 2568 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/83b69d26-ddc3-4213-a445-84588c734b1c-cilium-cgroup\") pod \"cilium-f2k9t\" (UID: \"83b69d26-ddc3-4213-a445-84588c734b1c\") " pod="kube-system/cilium-f2k9t" May 13 05:41:35.742956 systemd[1]: Created slice kubepods-besteffort-pod02f784bc_84a6_4680_82ab_ed710da4d9c9.slice - libcontainer container kubepods-besteffort-pod02f784bc_84a6_4680_82ab_ed710da4d9c9.slice. 
May 13 05:41:35.837278 kubelet[2568]: I0513 05:41:35.835855 2568 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-955v2\" (UniqueName: \"kubernetes.io/projected/02f784bc-84a6-4680-82ab-ed710da4d9c9-kube-api-access-955v2\") pod \"cilium-operator-5d85765b45-z8sfw\" (UID: \"02f784bc-84a6-4680-82ab-ed710da4d9c9\") " pod="kube-system/cilium-operator-5d85765b45-z8sfw" May 13 05:41:35.837278 kubelet[2568]: I0513 05:41:35.836144 2568 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/02f784bc-84a6-4680-82ab-ed710da4d9c9-cilium-config-path\") pod \"cilium-operator-5d85765b45-z8sfw\" (UID: \"02f784bc-84a6-4680-82ab-ed710da4d9c9\") " pod="kube-system/cilium-operator-5d85765b45-z8sfw" May 13 05:41:35.903154 containerd[1460]: time="2025-05-13T05:41:35.903107352Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-gmk6q,Uid:ffeee484-57b8-48bc-a816-bca75a48309f,Namespace:kube-system,Attempt:0,}" May 13 05:41:35.912913 containerd[1460]: time="2025-05-13T05:41:35.912798882Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-f2k9t,Uid:83b69d26-ddc3-4213-a445-84588c734b1c,Namespace:kube-system,Attempt:0,}" May 13 05:41:35.955267 containerd[1460]: time="2025-05-13T05:41:35.955122971Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 05:41:35.955267 containerd[1460]: time="2025-05-13T05:41:35.955188584Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 05:41:35.955480 containerd[1460]: time="2025-05-13T05:41:35.955235072Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 05:41:35.955480 containerd[1460]: time="2025-05-13T05:41:35.955319190Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 05:41:35.981495 containerd[1460]: time="2025-05-13T05:41:35.981087319Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 05:41:35.981495 containerd[1460]: time="2025-05-13T05:41:35.981145438Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 05:41:35.981495 containerd[1460]: time="2025-05-13T05:41:35.981165195Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 05:41:35.981495 containerd[1460]: time="2025-05-13T05:41:35.981314155Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 05:41:35.985475 systemd[1]: Started cri-containerd-ff5b2a3857151d47ca941a137b8fbd309458ca908a596d6e48eb5ae822c893d6.scope - libcontainer container ff5b2a3857151d47ca941a137b8fbd309458ca908a596d6e48eb5ae822c893d6. May 13 05:41:36.009540 systemd[1]: Started cri-containerd-6d76c81694c5bbc2995d80d7590befbd258f6d03cac63f9bf6d22684c0869e0c.scope - libcontainer container 6d76c81694c5bbc2995d80d7590befbd258f6d03cac63f9bf6d22684c0869e0c. 
May 13 05:41:36.039039 containerd[1460]: time="2025-05-13T05:41:36.038998193Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-gmk6q,Uid:ffeee484-57b8-48bc-a816-bca75a48309f,Namespace:kube-system,Attempt:0,} returns sandbox id \"ff5b2a3857151d47ca941a137b8fbd309458ca908a596d6e48eb5ae822c893d6\"" May 13 05:41:36.044818 containerd[1460]: time="2025-05-13T05:41:36.044700969Z" level=info msg="CreateContainer within sandbox \"ff5b2a3857151d47ca941a137b8fbd309458ca908a596d6e48eb5ae822c893d6\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 13 05:41:36.052692 containerd[1460]: time="2025-05-13T05:41:36.052454649Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-z8sfw,Uid:02f784bc-84a6-4680-82ab-ed710da4d9c9,Namespace:kube-system,Attempt:0,}" May 13 05:41:36.057326 containerd[1460]: time="2025-05-13T05:41:36.057258454Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-f2k9t,Uid:83b69d26-ddc3-4213-a445-84588c734b1c,Namespace:kube-system,Attempt:0,} returns sandbox id \"6d76c81694c5bbc2995d80d7590befbd258f6d03cac63f9bf6d22684c0869e0c\"" May 13 05:41:36.062450 containerd[1460]: time="2025-05-13T05:41:36.062382710Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" May 13 05:41:36.098390 containerd[1460]: time="2025-05-13T05:41:36.098350122Z" level=info msg="CreateContainer within sandbox \"ff5b2a3857151d47ca941a137b8fbd309458ca908a596d6e48eb5ae822c893d6\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"1b79dcf8743a2e1c76481e05597ae1b340da65e240f399123063f23d222403c9\"" May 13 05:41:36.101106 containerd[1460]: time="2025-05-13T05:41:36.100063874Z" level=info msg="StartContainer for \"1b79dcf8743a2e1c76481e05597ae1b340da65e240f399123063f23d222403c9\"" May 13 05:41:36.110946 containerd[1460]: time="2025-05-13T05:41:36.110845531Z" level=info msg="loading plugin 
\"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 05:41:36.111802 containerd[1460]: time="2025-05-13T05:41:36.111687925Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 05:41:36.111912 containerd[1460]: time="2025-05-13T05:41:36.111772554Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 05:41:36.112186 containerd[1460]: time="2025-05-13T05:41:36.111945078Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 05:41:36.136741 systemd[1]: Started cri-containerd-a1fc3402e613818a9f4f4aca2dcd6712ac964a3dfeee7529db24465dd13c3c3e.scope - libcontainer container a1fc3402e613818a9f4f4aca2dcd6712ac964a3dfeee7529db24465dd13c3c3e. May 13 05:41:36.143719 systemd[1]: Started cri-containerd-1b79dcf8743a2e1c76481e05597ae1b340da65e240f399123063f23d222403c9.scope - libcontainer container 1b79dcf8743a2e1c76481e05597ae1b340da65e240f399123063f23d222403c9. 
May 13 05:41:36.190076 containerd[1460]: time="2025-05-13T05:41:36.188631295Z" level=info msg="StartContainer for \"1b79dcf8743a2e1c76481e05597ae1b340da65e240f399123063f23d222403c9\" returns successfully" May 13 05:41:36.206232 containerd[1460]: time="2025-05-13T05:41:36.205300544Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-z8sfw,Uid:02f784bc-84a6-4680-82ab-ed710da4d9c9,Namespace:kube-system,Attempt:0,} returns sandbox id \"a1fc3402e613818a9f4f4aca2dcd6712ac964a3dfeee7529db24465dd13c3c3e\"" May 13 05:41:36.583730 kubelet[2568]: I0513 05:41:36.582779 2568 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-gmk6q" podStartSLOduration=1.582745923 podStartE2EDuration="1.582745923s" podCreationTimestamp="2025-05-13 05:41:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 05:41:36.582431732 +0000 UTC m=+6.209707235" watchObservedRunningTime="2025-05-13 05:41:36.582745923 +0000 UTC m=+6.210021466" May 13 05:41:42.392975 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1970822510.mount: Deactivated successfully. 
May 13 05:41:45.072268 containerd[1460]: time="2025-05-13T05:41:45.071268666Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 05:41:45.073268 containerd[1460]: time="2025-05-13T05:41:45.073219772Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" May 13 05:41:45.078434 containerd[1460]: time="2025-05-13T05:41:45.078290235Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 05:41:45.081527 containerd[1460]: time="2025-05-13T05:41:45.081478764Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 9.018797762s" May 13 05:41:45.081585 containerd[1460]: time="2025-05-13T05:41:45.081526273Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" May 13 05:41:45.083226 containerd[1460]: time="2025-05-13T05:41:45.083154232Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" May 13 05:41:45.085081 containerd[1460]: time="2025-05-13T05:41:45.084451779Z" level=info msg="CreateContainer within sandbox \"6d76c81694c5bbc2995d80d7590befbd258f6d03cac63f9bf6d22684c0869e0c\" for container 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 13 05:41:45.104246 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1299734937.mount: Deactivated successfully. May 13 05:41:45.110236 containerd[1460]: time="2025-05-13T05:41:45.110171439Z" level=info msg="CreateContainer within sandbox \"6d76c81694c5bbc2995d80d7590befbd258f6d03cac63f9bf6d22684c0869e0c\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"3b74ec0d4ed9208a409f63503d680b3edc6cd5326d6562b6b7c13724af7100bd\"" May 13 05:41:45.113321 containerd[1460]: time="2025-05-13T05:41:45.112247399Z" level=info msg="StartContainer for \"3b74ec0d4ed9208a409f63503d680b3edc6cd5326d6562b6b7c13724af7100bd\"" May 13 05:41:45.150353 systemd[1]: Started cri-containerd-3b74ec0d4ed9208a409f63503d680b3edc6cd5326d6562b6b7c13724af7100bd.scope - libcontainer container 3b74ec0d4ed9208a409f63503d680b3edc6cd5326d6562b6b7c13724af7100bd. May 13 05:41:45.184427 containerd[1460]: time="2025-05-13T05:41:45.183802386Z" level=info msg="StartContainer for \"3b74ec0d4ed9208a409f63503d680b3edc6cd5326d6562b6b7c13724af7100bd\" returns successfully" May 13 05:41:45.196971 systemd[1]: cri-containerd-3b74ec0d4ed9208a409f63503d680b3edc6cd5326d6562b6b7c13724af7100bd.scope: Deactivated successfully. May 13 05:41:46.101367 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3b74ec0d4ed9208a409f63503d680b3edc6cd5326d6562b6b7c13724af7100bd-rootfs.mount: Deactivated successfully. 
May 13 05:41:46.348468 containerd[1460]: time="2025-05-13T05:41:46.348008603Z" level=info msg="shim disconnected" id=3b74ec0d4ed9208a409f63503d680b3edc6cd5326d6562b6b7c13724af7100bd namespace=k8s.io May 13 05:41:46.348468 containerd[1460]: time="2025-05-13T05:41:46.348108290Z" level=warning msg="cleaning up after shim disconnected" id=3b74ec0d4ed9208a409f63503d680b3edc6cd5326d6562b6b7c13724af7100bd namespace=k8s.io May 13 05:41:46.348468 containerd[1460]: time="2025-05-13T05:41:46.348132816Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 13 05:41:46.598931 containerd[1460]: time="2025-05-13T05:41:46.598839679Z" level=info msg="CreateContainer within sandbox \"6d76c81694c5bbc2995d80d7590befbd258f6d03cac63f9bf6d22684c0869e0c\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 13 05:41:46.635082 containerd[1460]: time="2025-05-13T05:41:46.634976350Z" level=info msg="CreateContainer within sandbox \"6d76c81694c5bbc2995d80d7590befbd258f6d03cac63f9bf6d22684c0869e0c\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"a48b59360524fff754e2690a0b477af74ef545fcb1fbb79769a137bc31b271ee\"" May 13 05:41:46.637560 containerd[1460]: time="2025-05-13T05:41:46.637504048Z" level=info msg="StartContainer for \"a48b59360524fff754e2690a0b477af74ef545fcb1fbb79769a137bc31b271ee\"" May 13 05:41:46.696684 systemd[1]: Started cri-containerd-a48b59360524fff754e2690a0b477af74ef545fcb1fbb79769a137bc31b271ee.scope - libcontainer container a48b59360524fff754e2690a0b477af74ef545fcb1fbb79769a137bc31b271ee. May 13 05:41:46.733380 containerd[1460]: time="2025-05-13T05:41:46.733280513Z" level=info msg="StartContainer for \"a48b59360524fff754e2690a0b477af74ef545fcb1fbb79769a137bc31b271ee\" returns successfully" May 13 05:41:46.748281 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 13 05:41:46.748624 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. 
May 13 05:41:46.748694 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... May 13 05:41:46.757720 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 13 05:41:46.757989 systemd[1]: cri-containerd-a48b59360524fff754e2690a0b477af74ef545fcb1fbb79769a137bc31b271ee.scope: Deactivated successfully. May 13 05:41:46.786509 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a48b59360524fff754e2690a0b477af74ef545fcb1fbb79769a137bc31b271ee-rootfs.mount: Deactivated successfully. May 13 05:41:46.790302 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 13 05:41:46.800883 containerd[1460]: time="2025-05-13T05:41:46.800822881Z" level=info msg="shim disconnected" id=a48b59360524fff754e2690a0b477af74ef545fcb1fbb79769a137bc31b271ee namespace=k8s.io May 13 05:41:46.800883 containerd[1460]: time="2025-05-13T05:41:46.800878275Z" level=warning msg="cleaning up after shim disconnected" id=a48b59360524fff754e2690a0b477af74ef545fcb1fbb79769a137bc31b271ee namespace=k8s.io May 13 05:41:46.801030 containerd[1460]: time="2025-05-13T05:41:46.800890037Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 13 05:41:47.368807 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount492956678.mount: Deactivated successfully. May 13 05:41:47.607946 containerd[1460]: time="2025-05-13T05:41:47.607894376Z" level=info msg="CreateContainer within sandbox \"6d76c81694c5bbc2995d80d7590befbd258f6d03cac63f9bf6d22684c0869e0c\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 13 05:41:47.650331 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1202122301.mount: Deactivated successfully. 
May 13 05:41:47.658423 containerd[1460]: time="2025-05-13T05:41:47.658364059Z" level=info msg="CreateContainer within sandbox \"6d76c81694c5bbc2995d80d7590befbd258f6d03cac63f9bf6d22684c0869e0c\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"28f5dca97c91913d0a1a1b844859e2b764155fdaf07b9fbe9890fe4854b74a4c\"" May 13 05:41:47.660732 containerd[1460]: time="2025-05-13T05:41:47.660697221Z" level=info msg="StartContainer for \"28f5dca97c91913d0a1a1b844859e2b764155fdaf07b9fbe9890fe4854b74a4c\"" May 13 05:41:47.702365 systemd[1]: Started cri-containerd-28f5dca97c91913d0a1a1b844859e2b764155fdaf07b9fbe9890fe4854b74a4c.scope - libcontainer container 28f5dca97c91913d0a1a1b844859e2b764155fdaf07b9fbe9890fe4854b74a4c. May 13 05:41:47.738771 systemd[1]: cri-containerd-28f5dca97c91913d0a1a1b844859e2b764155fdaf07b9fbe9890fe4854b74a4c.scope: Deactivated successfully. May 13 05:41:47.742762 containerd[1460]: time="2025-05-13T05:41:47.742724278Z" level=info msg="StartContainer for \"28f5dca97c91913d0a1a1b844859e2b764155fdaf07b9fbe9890fe4854b74a4c\" returns successfully" May 13 05:41:47.858065 containerd[1460]: time="2025-05-13T05:41:47.857795854Z" level=info msg="shim disconnected" id=28f5dca97c91913d0a1a1b844859e2b764155fdaf07b9fbe9890fe4854b74a4c namespace=k8s.io May 13 05:41:47.858065 containerd[1460]: time="2025-05-13T05:41:47.858005228Z" level=warning msg="cleaning up after shim disconnected" id=28f5dca97c91913d0a1a1b844859e2b764155fdaf07b9fbe9890fe4854b74a4c namespace=k8s.io May 13 05:41:47.859513 containerd[1460]: time="2025-05-13T05:41:47.858604493Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 13 05:41:48.333479 containerd[1460]: time="2025-05-13T05:41:48.332861873Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 05:41:48.333479 containerd[1460]: 
time="2025-05-13T05:41:48.332965848Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" May 13 05:41:48.334570 containerd[1460]: time="2025-05-13T05:41:48.334526068Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 05:41:48.338525 containerd[1460]: time="2025-05-13T05:41:48.338494821Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 3.255307326s" May 13 05:41:48.338780 containerd[1460]: time="2025-05-13T05:41:48.338606781Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" May 13 05:41:48.342808 containerd[1460]: time="2025-05-13T05:41:48.342342997Z" level=info msg="CreateContainer within sandbox \"a1fc3402e613818a9f4f4aca2dcd6712ac964a3dfeee7529db24465dd13c3c3e\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" May 13 05:41:48.364674 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4000747058.mount: Deactivated successfully. 
May 13 05:41:48.373575 containerd[1460]: time="2025-05-13T05:41:48.373510335Z" level=info msg="CreateContainer within sandbox \"a1fc3402e613818a9f4f4aca2dcd6712ac964a3dfeee7529db24465dd13c3c3e\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"73792df8ddd748df39b31831bb6f2c72dbd2b7fff68331b04d06e973be35ec65\"" May 13 05:41:48.375862 containerd[1460]: time="2025-05-13T05:41:48.374345413Z" level=info msg="StartContainer for \"73792df8ddd748df39b31831bb6f2c72dbd2b7fff68331b04d06e973be35ec65\"" May 13 05:41:48.404356 systemd[1]: Started cri-containerd-73792df8ddd748df39b31831bb6f2c72dbd2b7fff68331b04d06e973be35ec65.scope - libcontainer container 73792df8ddd748df39b31831bb6f2c72dbd2b7fff68331b04d06e973be35ec65. May 13 05:41:48.436291 containerd[1460]: time="2025-05-13T05:41:48.436111982Z" level=info msg="StartContainer for \"73792df8ddd748df39b31831bb6f2c72dbd2b7fff68331b04d06e973be35ec65\" returns successfully" May 13 05:41:48.611401 containerd[1460]: time="2025-05-13T05:41:48.611290011Z" level=info msg="CreateContainer within sandbox \"6d76c81694c5bbc2995d80d7590befbd258f6d03cac63f9bf6d22684c0869e0c\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 13 05:41:48.641674 containerd[1460]: time="2025-05-13T05:41:48.641505302Z" level=info msg="CreateContainer within sandbox \"6d76c81694c5bbc2995d80d7590befbd258f6d03cac63f9bf6d22684c0869e0c\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"370c6a34de998bb7b95368dd010a218e5f5e74b654b232fd0a97c8d9d029fa11\"" May 13 05:41:48.642731 containerd[1460]: time="2025-05-13T05:41:48.642698141Z" level=info msg="StartContainer for \"370c6a34de998bb7b95368dd010a218e5f5e74b654b232fd0a97c8d9d029fa11\"" May 13 05:41:48.684416 systemd[1]: Started cri-containerd-370c6a34de998bb7b95368dd010a218e5f5e74b654b232fd0a97c8d9d029fa11.scope - libcontainer container 370c6a34de998bb7b95368dd010a218e5f5e74b654b232fd0a97c8d9d029fa11. 
May 13 05:41:48.766086 systemd[1]: cri-containerd-370c6a34de998bb7b95368dd010a218e5f5e74b654b232fd0a97c8d9d029fa11.scope: Deactivated successfully. May 13 05:41:48.769873 containerd[1460]: time="2025-05-13T05:41:48.768666432Z" level=info msg="StartContainer for \"370c6a34de998bb7b95368dd010a218e5f5e74b654b232fd0a97c8d9d029fa11\" returns successfully" May 13 05:41:49.067905 containerd[1460]: time="2025-05-13T05:41:49.067641435Z" level=info msg="shim disconnected" id=370c6a34de998bb7b95368dd010a218e5f5e74b654b232fd0a97c8d9d029fa11 namespace=k8s.io May 13 05:41:49.067905 containerd[1460]: time="2025-05-13T05:41:49.067723048Z" level=warning msg="cleaning up after shim disconnected" id=370c6a34de998bb7b95368dd010a218e5f5e74b654b232fd0a97c8d9d029fa11 namespace=k8s.io May 13 05:41:49.067905 containerd[1460]: time="2025-05-13T05:41:49.067736353Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 13 05:41:49.619909 containerd[1460]: time="2025-05-13T05:41:49.619771585Z" level=info msg="CreateContainer within sandbox \"6d76c81694c5bbc2995d80d7590befbd258f6d03cac63f9bf6d22684c0869e0c\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 13 05:41:49.642718 kubelet[2568]: I0513 05:41:49.642545 2568 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-z8sfw" podStartSLOduration=2.512147141 podStartE2EDuration="14.642526842s" podCreationTimestamp="2025-05-13 05:41:35 +0000 UTC" firstStartedPulling="2025-05-13 05:41:36.209114707 +0000 UTC m=+5.836390210" lastFinishedPulling="2025-05-13 05:41:48.339494357 +0000 UTC m=+17.966769911" observedRunningTime="2025-05-13 05:41:48.77943842 +0000 UTC m=+18.406713924" watchObservedRunningTime="2025-05-13 05:41:49.642526842 +0000 UTC m=+19.269802335" May 13 05:41:49.647636 containerd[1460]: time="2025-05-13T05:41:49.647553441Z" level=info msg="CreateContainer within sandbox \"6d76c81694c5bbc2995d80d7590befbd258f6d03cac63f9bf6d22684c0869e0c\" for 
&ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"ea18c1fadb9829b4b91ace314862948cc12cd5e32edd436a81196c4ba6bae6e7\"" May 13 05:41:49.650253 containerd[1460]: time="2025-05-13T05:41:49.649399437Z" level=info msg="StartContainer for \"ea18c1fadb9829b4b91ace314862948cc12cd5e32edd436a81196c4ba6bae6e7\"" May 13 05:41:49.686392 systemd[1]: Started cri-containerd-ea18c1fadb9829b4b91ace314862948cc12cd5e32edd436a81196c4ba6bae6e7.scope - libcontainer container ea18c1fadb9829b4b91ace314862948cc12cd5e32edd436a81196c4ba6bae6e7. May 13 05:41:49.724246 containerd[1460]: time="2025-05-13T05:41:49.724216271Z" level=info msg="StartContainer for \"ea18c1fadb9829b4b91ace314862948cc12cd5e32edd436a81196c4ba6bae6e7\" returns successfully" May 13 05:41:49.885997 kubelet[2568]: I0513 05:41:49.885938 2568 kubelet_node_status.go:488] "Fast updating node status as it just became ready" May 13 05:41:49.938247 systemd[1]: Created slice kubepods-burstable-pode337bb61_a571_418f_babe_4a2ce32bbcca.slice - libcontainer container kubepods-burstable-pode337bb61_a571_418f_babe_4a2ce32bbcca.slice. May 13 05:41:49.947346 systemd[1]: Created slice kubepods-burstable-pod7fa255e0_26d4_41db_91dd_0b429cd2f44a.slice - libcontainer container kubepods-burstable-pod7fa255e0_26d4_41db_91dd_0b429cd2f44a.slice. 
May 13 05:41:50.048928 kubelet[2568]: I0513 05:41:50.048853 2568 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e337bb61-a571-418f-babe-4a2ce32bbcca-config-volume\") pod \"coredns-6f6b679f8f-pjzgj\" (UID: \"e337bb61-a571-418f-babe-4a2ce32bbcca\") " pod="kube-system/coredns-6f6b679f8f-pjzgj" May 13 05:41:50.050038 kubelet[2568]: I0513 05:41:50.049804 2568 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmmbx\" (UniqueName: \"kubernetes.io/projected/7fa255e0-26d4-41db-91dd-0b429cd2f44a-kube-api-access-zmmbx\") pod \"coredns-6f6b679f8f-4rrkj\" (UID: \"7fa255e0-26d4-41db-91dd-0b429cd2f44a\") " pod="kube-system/coredns-6f6b679f8f-4rrkj" May 13 05:41:50.050038 kubelet[2568]: I0513 05:41:50.049868 2568 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7fa255e0-26d4-41db-91dd-0b429cd2f44a-config-volume\") pod \"coredns-6f6b679f8f-4rrkj\" (UID: \"7fa255e0-26d4-41db-91dd-0b429cd2f44a\") " pod="kube-system/coredns-6f6b679f8f-4rrkj" May 13 05:41:50.050038 kubelet[2568]: I0513 05:41:50.049893 2568 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xcx9g\" (UniqueName: \"kubernetes.io/projected/e337bb61-a571-418f-babe-4a2ce32bbcca-kube-api-access-xcx9g\") pod \"coredns-6f6b679f8f-pjzgj\" (UID: \"e337bb61-a571-418f-babe-4a2ce32bbcca\") " pod="kube-system/coredns-6f6b679f8f-pjzgj" May 13 05:41:50.244157 containerd[1460]: time="2025-05-13T05:41:50.244054734Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-pjzgj,Uid:e337bb61-a571-418f-babe-4a2ce32bbcca,Namespace:kube-system,Attempt:0,}" May 13 05:41:50.252128 containerd[1460]: time="2025-05-13T05:41:50.252016372Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-6f6b679f8f-4rrkj,Uid:7fa255e0-26d4-41db-91dd-0b429cd2f44a,Namespace:kube-system,Attempt:0,}" May 13 05:41:51.969858 systemd-networkd[1363]: cilium_host: Link UP May 13 05:41:51.970615 systemd-networkd[1363]: cilium_net: Link UP May 13 05:41:51.972559 systemd-networkd[1363]: cilium_net: Gained carrier May 13 05:41:51.973187 systemd-networkd[1363]: cilium_host: Gained carrier May 13 05:41:51.973357 systemd-networkd[1363]: cilium_net: Gained IPv6LL May 13 05:41:51.973523 systemd-networkd[1363]: cilium_host: Gained IPv6LL May 13 05:41:52.077154 systemd-networkd[1363]: cilium_vxlan: Link UP May 13 05:41:52.077161 systemd-networkd[1363]: cilium_vxlan: Gained carrier May 13 05:41:52.410245 kernel: NET: Registered PF_ALG protocol family May 13 05:41:53.251231 systemd-networkd[1363]: lxc_health: Link UP May 13 05:41:53.258934 systemd-networkd[1363]: lxc_health: Gained carrier May 13 05:41:53.486410 systemd-networkd[1363]: cilium_vxlan: Gained IPv6LL May 13 05:41:53.856188 kernel: eth0: renamed from tmpbb0d9 May 13 05:41:53.863991 systemd-networkd[1363]: lxcae8698201833: Link UP May 13 05:41:53.885737 systemd-networkd[1363]: lxcae8698201833: Gained carrier May 13 05:41:53.899520 systemd-networkd[1363]: lxccd650530415c: Link UP May 13 05:41:53.909309 kernel: eth0: renamed from tmp8e441 May 13 05:41:53.919908 systemd-networkd[1363]: lxccd650530415c: Gained carrier May 13 05:41:53.975147 kubelet[2568]: I0513 05:41:53.974810 2568 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-f2k9t" podStartSLOduration=9.952664563999999 podStartE2EDuration="18.974657974s" podCreationTimestamp="2025-05-13 05:41:35 +0000 UTC" firstStartedPulling="2025-05-13 05:41:36.060592233 +0000 UTC m=+5.687867726" lastFinishedPulling="2025-05-13 05:41:45.082585643 +0000 UTC m=+14.709861136" observedRunningTime="2025-05-13 05:41:50.652865855 +0000 UTC m=+20.280141359" watchObservedRunningTime="2025-05-13 05:41:53.974657974 +0000 UTC 
m=+23.601933477" May 13 05:41:54.574397 systemd-networkd[1363]: lxc_health: Gained IPv6LL May 13 05:41:55.152503 systemd-networkd[1363]: lxccd650530415c: Gained IPv6LL May 13 05:41:55.854498 systemd-networkd[1363]: lxcae8698201833: Gained IPv6LL May 13 05:41:58.624366 containerd[1460]: time="2025-05-13T05:41:58.623781456Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 05:41:58.624366 containerd[1460]: time="2025-05-13T05:41:58.623842500Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 05:41:58.627312 containerd[1460]: time="2025-05-13T05:41:58.626955543Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 05:41:58.627312 containerd[1460]: time="2025-05-13T05:41:58.627294268Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 05:41:58.687416 systemd[1]: Started cri-containerd-bb0d947cbbe1be3ad622e12fd7394735461b2e5f22743dbb6f7f5b4108c2d01c.scope - libcontainer container bb0d947cbbe1be3ad622e12fd7394735461b2e5f22743dbb6f7f5b4108c2d01c. May 13 05:41:58.728721 containerd[1460]: time="2025-05-13T05:41:58.728277206Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 05:41:58.728721 containerd[1460]: time="2025-05-13T05:41:58.728341827Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 05:41:58.728721 containerd[1460]: time="2025-05-13T05:41:58.728368477Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 05:41:58.729341 containerd[1460]: time="2025-05-13T05:41:58.729137000Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 05:41:58.768375 systemd[1]: Started cri-containerd-8e4414cc8b93445db11d139a04c097e4aa2ab99deb9ded8c6f976a2e2886bc07.scope - libcontainer container 8e4414cc8b93445db11d139a04c097e4aa2ab99deb9ded8c6f976a2e2886bc07. May 13 05:41:58.808867 containerd[1460]: time="2025-05-13T05:41:58.808823981Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-4rrkj,Uid:7fa255e0-26d4-41db-91dd-0b429cd2f44a,Namespace:kube-system,Attempt:0,} returns sandbox id \"bb0d947cbbe1be3ad622e12fd7394735461b2e5f22743dbb6f7f5b4108c2d01c\"" May 13 05:41:58.813215 containerd[1460]: time="2025-05-13T05:41:58.813094446Z" level=info msg="CreateContainer within sandbox \"bb0d947cbbe1be3ad622e12fd7394735461b2e5f22743dbb6f7f5b4108c2d01c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 13 05:41:58.832875 containerd[1460]: time="2025-05-13T05:41:58.832821589Z" level=info msg="CreateContainer within sandbox \"bb0d947cbbe1be3ad622e12fd7394735461b2e5f22743dbb6f7f5b4108c2d01c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4a633421091a30b9d2c552fb20ee35d2e482481939ce147acc23730b2cd2a507\"" May 13 05:41:58.833902 containerd[1460]: time="2025-05-13T05:41:58.833555486Z" level=info msg="StartContainer for \"4a633421091a30b9d2c552fb20ee35d2e482481939ce147acc23730b2cd2a507\"" May 13 05:41:58.871620 systemd[1]: Started cri-containerd-4a633421091a30b9d2c552fb20ee35d2e482481939ce147acc23730b2cd2a507.scope - libcontainer container 4a633421091a30b9d2c552fb20ee35d2e482481939ce147acc23730b2cd2a507. 
May 13 05:41:58.893382 containerd[1460]: time="2025-05-13T05:41:58.892567777Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-pjzgj,Uid:e337bb61-a571-418f-babe-4a2ce32bbcca,Namespace:kube-system,Attempt:0,} returns sandbox id \"8e4414cc8b93445db11d139a04c097e4aa2ab99deb9ded8c6f976a2e2886bc07\"" May 13 05:41:58.897029 containerd[1460]: time="2025-05-13T05:41:58.896625231Z" level=info msg="CreateContainer within sandbox \"8e4414cc8b93445db11d139a04c097e4aa2ab99deb9ded8c6f976a2e2886bc07\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 13 05:41:58.926725 containerd[1460]: time="2025-05-13T05:41:58.926657614Z" level=info msg="CreateContainer within sandbox \"8e4414cc8b93445db11d139a04c097e4aa2ab99deb9ded8c6f976a2e2886bc07\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"96bd3fb5d9dbb43aba4bfcb91f51d4783805a23a810e20087e7f054b67207025\"" May 13 05:41:58.926957 containerd[1460]: time="2025-05-13T05:41:58.926671340Z" level=info msg="StartContainer for \"4a633421091a30b9d2c552fb20ee35d2e482481939ce147acc23730b2cd2a507\" returns successfully" May 13 05:41:58.929812 containerd[1460]: time="2025-05-13T05:41:58.927565858Z" level=info msg="StartContainer for \"96bd3fb5d9dbb43aba4bfcb91f51d4783805a23a810e20087e7f054b67207025\"" May 13 05:41:58.970650 systemd[1]: Started cri-containerd-96bd3fb5d9dbb43aba4bfcb91f51d4783805a23a810e20087e7f054b67207025.scope - libcontainer container 96bd3fb5d9dbb43aba4bfcb91f51d4783805a23a810e20087e7f054b67207025. 
May 13 05:41:59.071591 containerd[1460]: time="2025-05-13T05:41:59.071511150Z" level=info msg="StartContainer for \"96bd3fb5d9dbb43aba4bfcb91f51d4783805a23a810e20087e7f054b67207025\" returns successfully" May 13 05:41:59.693022 kubelet[2568]: I0513 05:41:59.692709 2568 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-4rrkj" podStartSLOduration=24.692676214 podStartE2EDuration="24.692676214s" podCreationTimestamp="2025-05-13 05:41:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 05:41:59.690742125 +0000 UTC m=+29.318017668" watchObservedRunningTime="2025-05-13 05:41:59.692676214 +0000 UTC m=+29.319951757" May 13 05:41:59.748163 kubelet[2568]: I0513 05:41:59.748000 2568 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-pjzgj" podStartSLOduration=24.747972507 podStartE2EDuration="24.747972507s" podCreationTimestamp="2025-05-13 05:41:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 05:41:59.747394011 +0000 UTC m=+29.374669564" watchObservedRunningTime="2025-05-13 05:41:59.747972507 +0000 UTC m=+29.375248060" May 13 05:46:23.657317 systemd[1]: Started sshd@7-172.24.4.224:22-172.24.4.1:32886.service - OpenSSH per-connection server daemon (172.24.4.1:32886). May 13 05:46:24.992091 sshd[3967]: Accepted publickey for core from 172.24.4.1 port 32886 ssh2: RSA SHA256:+E8Eq1uMTdkzoHHj4Cx4DdKLDlGHP/AhkvM7vBSKHyU May 13 05:46:24.997777 sshd[3967]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 05:46:25.019853 systemd-logind[1441]: New session 10 of user core. May 13 05:46:25.029699 systemd[1]: Started session-10.scope - Session 10 of User core. 
May 13 05:46:26.128933 sshd[3967]: pam_unix(sshd:session): session closed for user core May 13 05:46:26.138747 systemd[1]: sshd@7-172.24.4.224:22-172.24.4.1:32886.service: Deactivated successfully. May 13 05:46:26.147497 systemd[1]: session-10.scope: Deactivated successfully. May 13 05:46:26.150785 systemd-logind[1441]: Session 10 logged out. Waiting for processes to exit. May 13 05:46:26.155163 systemd-logind[1441]: Removed session 10. May 13 05:46:31.160116 systemd[1]: Started sshd@8-172.24.4.224:22-172.24.4.1:32888.service - OpenSSH per-connection server daemon (172.24.4.1:32888). May 13 05:46:32.554405 sshd[3987]: Accepted publickey for core from 172.24.4.1 port 32888 ssh2: RSA SHA256:+E8Eq1uMTdkzoHHj4Cx4DdKLDlGHP/AhkvM7vBSKHyU May 13 05:46:32.559016 sshd[3987]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 05:46:32.572365 systemd-logind[1441]: New session 11 of user core. May 13 05:46:32.581659 systemd[1]: Started session-11.scope - Session 11 of User core. May 13 05:46:33.390909 sshd[3987]: pam_unix(sshd:session): session closed for user core May 13 05:46:33.396553 systemd[1]: sshd@8-172.24.4.224:22-172.24.4.1:32888.service: Deactivated successfully. May 13 05:46:33.402400 systemd[1]: session-11.scope: Deactivated successfully. May 13 05:46:33.407582 systemd-logind[1441]: Session 11 logged out. Waiting for processes to exit. May 13 05:46:33.410800 systemd-logind[1441]: Removed session 11. May 13 05:46:38.431955 systemd[1]: Started sshd@9-172.24.4.224:22-172.24.4.1:50088.service - OpenSSH per-connection server daemon (172.24.4.1:50088). May 13 05:46:39.669233 sshd[4003]: Accepted publickey for core from 172.24.4.1 port 50088 ssh2: RSA SHA256:+E8Eq1uMTdkzoHHj4Cx4DdKLDlGHP/AhkvM7vBSKHyU May 13 05:46:39.671823 sshd[4003]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 05:46:39.680847 systemd-logind[1441]: New session 12 of user core. 
May 13 05:46:39.686493 systemd[1]: Started session-12.scope - Session 12 of User core. May 13 05:46:40.438880 sshd[4003]: pam_unix(sshd:session): session closed for user core May 13 05:46:40.446783 systemd[1]: sshd@9-172.24.4.224:22-172.24.4.1:50088.service: Deactivated successfully. May 13 05:46:40.453608 systemd[1]: session-12.scope: Deactivated successfully. May 13 05:46:40.459140 systemd-logind[1441]: Session 12 logged out. Waiting for processes to exit. May 13 05:46:40.462286 systemd-logind[1441]: Removed session 12. May 13 05:46:45.480027 systemd[1]: Started sshd@10-172.24.4.224:22-172.24.4.1:49032.service - OpenSSH per-connection server daemon (172.24.4.1:49032). May 13 05:46:46.746040 sshd[4017]: Accepted publickey for core from 172.24.4.1 port 49032 ssh2: RSA SHA256:+E8Eq1uMTdkzoHHj4Cx4DdKLDlGHP/AhkvM7vBSKHyU May 13 05:46:46.750935 sshd[4017]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 05:46:46.768437 systemd-logind[1441]: New session 13 of user core. May 13 05:46:46.777639 systemd[1]: Started session-13.scope - Session 13 of User core. May 13 05:46:47.667842 sshd[4017]: pam_unix(sshd:session): session closed for user core May 13 05:46:47.687267 systemd[1]: sshd@10-172.24.4.224:22-172.24.4.1:49032.service: Deactivated successfully. May 13 05:46:47.696526 systemd[1]: session-13.scope: Deactivated successfully. May 13 05:46:47.704383 systemd-logind[1441]: Session 13 logged out. Waiting for processes to exit. May 13 05:46:47.712384 systemd[1]: Started sshd@11-172.24.4.224:22-172.24.4.1:49034.service - OpenSSH per-connection server daemon (172.24.4.1:49034). May 13 05:46:47.720470 systemd-logind[1441]: Removed session 13. 
May 13 05:46:49.026999 sshd[4031]: Accepted publickey for core from 172.24.4.1 port 49034 ssh2: RSA SHA256:+E8Eq1uMTdkzoHHj4Cx4DdKLDlGHP/AhkvM7vBSKHyU May 13 05:46:49.032989 sshd[4031]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 05:46:49.051872 systemd-logind[1441]: New session 14 of user core. May 13 05:46:49.058570 systemd[1]: Started session-14.scope - Session 14 of User core. May 13 05:46:49.843357 sshd[4031]: pam_unix(sshd:session): session closed for user core May 13 05:46:49.867398 systemd[1]: sshd@11-172.24.4.224:22-172.24.4.1:49034.service: Deactivated successfully. May 13 05:46:49.874816 systemd[1]: session-14.scope: Deactivated successfully. May 13 05:46:49.882427 systemd-logind[1441]: Session 14 logged out. Waiting for processes to exit. May 13 05:46:49.892476 systemd[1]: Started sshd@12-172.24.4.224:22-172.24.4.1:49042.service - OpenSSH per-connection server daemon (172.24.4.1:49042). May 13 05:46:49.904708 systemd-logind[1441]: Removed session 14. May 13 05:46:51.016551 sshd[4041]: Accepted publickey for core from 172.24.4.1 port 49042 ssh2: RSA SHA256:+E8Eq1uMTdkzoHHj4Cx4DdKLDlGHP/AhkvM7vBSKHyU May 13 05:46:51.019953 sshd[4041]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 05:46:51.031345 systemd-logind[1441]: New session 15 of user core. May 13 05:46:51.039606 systemd[1]: Started session-15.scope - Session 15 of User core. May 13 05:46:51.910717 sshd[4041]: pam_unix(sshd:session): session closed for user core May 13 05:46:51.919497 systemd[1]: sshd@12-172.24.4.224:22-172.24.4.1:49042.service: Deactivated successfully. May 13 05:46:51.925786 systemd[1]: session-15.scope: Deactivated successfully. May 13 05:46:51.928519 systemd-logind[1441]: Session 15 logged out. Waiting for processes to exit. May 13 05:46:51.931939 systemd-logind[1441]: Removed session 15. 
May 13 05:46:56.934087 systemd[1]: Started sshd@13-172.24.4.224:22-172.24.4.1:57202.service - OpenSSH per-connection server daemon (172.24.4.1:57202). May 13 05:46:58.132422 sshd[4054]: Accepted publickey for core from 172.24.4.1 port 57202 ssh2: RSA SHA256:+E8Eq1uMTdkzoHHj4Cx4DdKLDlGHP/AhkvM7vBSKHyU May 13 05:46:58.136456 sshd[4054]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 05:46:58.152831 systemd-logind[1441]: New session 16 of user core. May 13 05:46:58.157626 systemd[1]: Started session-16.scope - Session 16 of User core. May 13 05:46:58.962626 sshd[4054]: pam_unix(sshd:session): session closed for user core May 13 05:46:58.991564 systemd[1]: sshd@13-172.24.4.224:22-172.24.4.1:57202.service: Deactivated successfully. May 13 05:46:58.998046 systemd[1]: session-16.scope: Deactivated successfully. May 13 05:46:59.002699 systemd-logind[1441]: Session 16 logged out. Waiting for processes to exit. May 13 05:46:59.011975 systemd[1]: Started sshd@14-172.24.4.224:22-172.24.4.1:57216.service - OpenSSH per-connection server daemon (172.24.4.1:57216). May 13 05:46:59.016313 systemd-logind[1441]: Removed session 16. May 13 05:47:00.336336 sshd[4069]: Accepted publickey for core from 172.24.4.1 port 57216 ssh2: RSA SHA256:+E8Eq1uMTdkzoHHj4Cx4DdKLDlGHP/AhkvM7vBSKHyU May 13 05:47:00.340062 sshd[4069]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 05:47:00.352754 systemd-logind[1441]: New session 17 of user core. May 13 05:47:00.362587 systemd[1]: Started session-17.scope - Session 17 of User core. May 13 05:47:01.188108 sshd[4069]: pam_unix(sshd:session): session closed for user core May 13 05:47:01.211804 systemd[1]: sshd@14-172.24.4.224:22-172.24.4.1:57216.service: Deactivated successfully. May 13 05:47:01.217684 systemd[1]: session-17.scope: Deactivated successfully. May 13 05:47:01.219544 systemd-logind[1441]: Session 17 logged out. Waiting for processes to exit. 
May 13 05:47:01.228952 systemd[1]: Started sshd@15-172.24.4.224:22-172.24.4.1:57228.service - OpenSSH per-connection server daemon (172.24.4.1:57228). May 13 05:47:01.230479 systemd-logind[1441]: Removed session 17. May 13 05:47:02.393055 sshd[4081]: Accepted publickey for core from 172.24.4.1 port 57228 ssh2: RSA SHA256:+E8Eq1uMTdkzoHHj4Cx4DdKLDlGHP/AhkvM7vBSKHyU May 13 05:47:02.396598 sshd[4081]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 05:47:02.409813 systemd-logind[1441]: New session 18 of user core. May 13 05:47:02.420341 systemd[1]: Started session-18.scope - Session 18 of User core. May 13 05:47:05.597626 sshd[4081]: pam_unix(sshd:session): session closed for user core May 13 05:47:05.623070 systemd[1]: sshd@15-172.24.4.224:22-172.24.4.1:57228.service: Deactivated successfully. May 13 05:47:05.628504 systemd[1]: session-18.scope: Deactivated successfully. May 13 05:47:05.628991 systemd[1]: session-18.scope: Consumed 1.012s CPU time. May 13 05:47:05.632675 systemd-logind[1441]: Session 18 logged out. Waiting for processes to exit. May 13 05:47:05.649675 systemd[1]: Started sshd@16-172.24.4.224:22-172.24.4.1:53956.service - OpenSSH per-connection server daemon (172.24.4.1:53956). May 13 05:47:05.655006 systemd-logind[1441]: Removed session 18. May 13 05:47:06.776286 sshd[4100]: Accepted publickey for core from 172.24.4.1 port 53956 ssh2: RSA SHA256:+E8Eq1uMTdkzoHHj4Cx4DdKLDlGHP/AhkvM7vBSKHyU May 13 05:47:06.779448 sshd[4100]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 05:47:06.795326 systemd-logind[1441]: New session 19 of user core. May 13 05:47:06.805034 systemd[1]: Started session-19.scope - Session 19 of User core. May 13 05:47:07.922579 sshd[4100]: pam_unix(sshd:session): session closed for user core May 13 05:47:07.943739 systemd[1]: sshd@16-172.24.4.224:22-172.24.4.1:53956.service: Deactivated successfully. 
May 13 05:47:07.949422 systemd[1]: session-19.scope: Deactivated successfully. May 13 05:47:07.952995 systemd-logind[1441]: Session 19 logged out. Waiting for processes to exit. May 13 05:47:07.960845 systemd[1]: Started sshd@17-172.24.4.224:22-172.24.4.1:53964.service - OpenSSH per-connection server daemon (172.24.4.1:53964). May 13 05:47:07.969455 systemd-logind[1441]: Removed session 19. May 13 05:47:09.196641 sshd[4112]: Accepted publickey for core from 172.24.4.1 port 53964 ssh2: RSA SHA256:+E8Eq1uMTdkzoHHj4Cx4DdKLDlGHP/AhkvM7vBSKHyU May 13 05:47:09.202580 sshd[4112]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 05:47:09.219504 systemd-logind[1441]: New session 20 of user core. May 13 05:47:09.232768 systemd[1]: Started session-20.scope - Session 20 of User core. May 13 05:47:10.043780 sshd[4112]: pam_unix(sshd:session): session closed for user core May 13 05:47:10.059065 systemd[1]: sshd@17-172.24.4.224:22-172.24.4.1:53964.service: Deactivated successfully. May 13 05:47:10.066907 systemd[1]: session-20.scope: Deactivated successfully. May 13 05:47:10.070875 systemd-logind[1441]: Session 20 logged out. Waiting for processes to exit. May 13 05:47:10.073586 systemd-logind[1441]: Removed session 20. May 13 05:47:15.080304 systemd[1]: Started sshd@18-172.24.4.224:22-172.24.4.1:50274.service - OpenSSH per-connection server daemon (172.24.4.1:50274). May 13 05:47:16.198305 sshd[4127]: Accepted publickey for core from 172.24.4.1 port 50274 ssh2: RSA SHA256:+E8Eq1uMTdkzoHHj4Cx4DdKLDlGHP/AhkvM7vBSKHyU May 13 05:47:16.200388 sshd[4127]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 05:47:16.213344 systemd-logind[1441]: New session 21 of user core. May 13 05:47:16.224690 systemd[1]: Started session-21.scope - Session 21 of User core. May 13 05:47:17.019153 sshd[4127]: pam_unix(sshd:session): session closed for user core May 13 05:47:17.027467 systemd-logind[1441]: Session 21 logged out. 
Waiting for processes to exit. May 13 05:47:17.029831 systemd[1]: sshd@18-172.24.4.224:22-172.24.4.1:50274.service: Deactivated successfully. May 13 05:47:17.035668 systemd[1]: session-21.scope: Deactivated successfully. May 13 05:47:17.042552 systemd-logind[1441]: Removed session 21. May 13 05:47:22.049988 systemd[1]: Started sshd@19-172.24.4.224:22-172.24.4.1:50286.service - OpenSSH per-connection server daemon (172.24.4.1:50286). May 13 05:47:23.402407 sshd[4139]: Accepted publickey for core from 172.24.4.1 port 50286 ssh2: RSA SHA256:+E8Eq1uMTdkzoHHj4Cx4DdKLDlGHP/AhkvM7vBSKHyU May 13 05:47:23.406314 sshd[4139]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 05:47:23.420116 systemd-logind[1441]: New session 22 of user core. May 13 05:47:23.428525 systemd[1]: Started session-22.scope - Session 22 of User core. May 13 05:47:24.190252 sshd[4139]: pam_unix(sshd:session): session closed for user core May 13 05:47:24.203604 systemd-logind[1441]: Session 22 logged out. Waiting for processes to exit. May 13 05:47:24.204870 systemd[1]: sshd@19-172.24.4.224:22-172.24.4.1:50286.service: Deactivated successfully. May 13 05:47:24.213795 systemd[1]: session-22.scope: Deactivated successfully. May 13 05:47:24.221051 systemd-logind[1441]: Removed session 22. May 13 05:47:29.231182 systemd[1]: Started sshd@20-172.24.4.224:22-172.24.4.1:35602.service - OpenSSH per-connection server daemon (172.24.4.1:35602). May 13 05:47:30.770841 sshd[4152]: Accepted publickey for core from 172.24.4.1 port 35602 ssh2: RSA SHA256:+E8Eq1uMTdkzoHHj4Cx4DdKLDlGHP/AhkvM7vBSKHyU May 13 05:47:30.775841 sshd[4152]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 05:47:30.791315 systemd-logind[1441]: New session 23 of user core. May 13 05:47:30.801588 systemd[1]: Started session-23.scope - Session 23 of User core. 
May 13 05:47:31.594191 sshd[4152]: pam_unix(sshd:session): session closed for user core May 13 05:47:31.607148 systemd[1]: sshd@20-172.24.4.224:22-172.24.4.1:35602.service: Deactivated successfully. May 13 05:47:31.613524 systemd[1]: session-23.scope: Deactivated successfully. May 13 05:47:31.618193 systemd-logind[1441]: Session 23 logged out. Waiting for processes to exit. May 13 05:47:31.633935 systemd[1]: Started sshd@21-172.24.4.224:22-172.24.4.1:35606.service - OpenSSH per-connection server daemon (172.24.4.1:35606). May 13 05:47:31.638067 systemd-logind[1441]: Removed session 23. May 13 05:47:32.951060 sshd[4167]: Accepted publickey for core from 172.24.4.1 port 35606 ssh2: RSA SHA256:+E8Eq1uMTdkzoHHj4Cx4DdKLDlGHP/AhkvM7vBSKHyU May 13 05:47:32.955642 sshd[4167]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 05:47:32.968538 systemd-logind[1441]: New session 24 of user core. May 13 05:47:32.974641 systemd[1]: Started session-24.scope - Session 24 of User core. May 13 05:47:35.511822 containerd[1460]: time="2025-05-13T05:47:35.511544738Z" level=info msg="StopContainer for \"73792df8ddd748df39b31831bb6f2c72dbd2b7fff68331b04d06e973be35ec65\" with timeout 30 (s)" May 13 05:47:35.515426 containerd[1460]: time="2025-05-13T05:47:35.513356341Z" level=info msg="Stop container \"73792df8ddd748df39b31831bb6f2c72dbd2b7fff68331b04d06e973be35ec65\" with signal terminated" May 13 05:47:35.547759 systemd[1]: cri-containerd-73792df8ddd748df39b31831bb6f2c72dbd2b7fff68331b04d06e973be35ec65.scope: Deactivated successfully. May 13 05:47:35.548171 systemd[1]: cri-containerd-73792df8ddd748df39b31831bb6f2c72dbd2b7fff68331b04d06e973be35ec65.scope: Consumed 1.342s CPU time. 
May 13 05:47:35.577574 containerd[1460]: time="2025-05-13T05:47:35.577003095Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 13 05:47:35.589407 containerd[1460]: time="2025-05-13T05:47:35.589191553Z" level=info msg="StopContainer for \"ea18c1fadb9829b4b91ace314862948cc12cd5e32edd436a81196c4ba6bae6e7\" with timeout 2 (s)" May 13 05:47:35.589973 containerd[1460]: time="2025-05-13T05:47:35.589908188Z" level=info msg="Stop container \"ea18c1fadb9829b4b91ace314862948cc12cd5e32edd436a81196c4ba6bae6e7\" with signal terminated" May 13 05:47:35.604929 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-73792df8ddd748df39b31831bb6f2c72dbd2b7fff68331b04d06e973be35ec65-rootfs.mount: Deactivated successfully. May 13 05:47:35.618890 systemd-networkd[1363]: lxc_health: Link DOWN May 13 05:47:35.619579 systemd-networkd[1363]: lxc_health: Lost carrier May 13 05:47:35.627980 containerd[1460]: time="2025-05-13T05:47:35.627836158Z" level=info msg="shim disconnected" id=73792df8ddd748df39b31831bb6f2c72dbd2b7fff68331b04d06e973be35ec65 namespace=k8s.io May 13 05:47:35.628280 containerd[1460]: time="2025-05-13T05:47:35.628185805Z" level=warning msg="cleaning up after shim disconnected" id=73792df8ddd748df39b31831bb6f2c72dbd2b7fff68331b04d06e973be35ec65 namespace=k8s.io May 13 05:47:35.628410 containerd[1460]: time="2025-05-13T05:47:35.628382905Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 13 05:47:35.639656 systemd[1]: cri-containerd-ea18c1fadb9829b4b91ace314862948cc12cd5e32edd436a81196c4ba6bae6e7.scope: Deactivated successfully. May 13 05:47:35.640577 systemd[1]: cri-containerd-ea18c1fadb9829b4b91ace314862948cc12cd5e32edd436a81196c4ba6bae6e7.scope: Consumed 11.423s CPU time. 
May 13 05:47:35.692255 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ea18c1fadb9829b4b91ace314862948cc12cd5e32edd436a81196c4ba6bae6e7-rootfs.mount: Deactivated successfully. May 13 05:47:35.701426 containerd[1460]: time="2025-05-13T05:47:35.701234013Z" level=warning msg="cleanup warnings time=\"2025-05-13T05:47:35Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io May 13 05:47:35.706281 containerd[1460]: time="2025-05-13T05:47:35.705928749Z" level=info msg="shim disconnected" id=ea18c1fadb9829b4b91ace314862948cc12cd5e32edd436a81196c4ba6bae6e7 namespace=k8s.io May 13 05:47:35.706281 containerd[1460]: time="2025-05-13T05:47:35.706130427Z" level=warning msg="cleaning up after shim disconnected" id=ea18c1fadb9829b4b91ace314862948cc12cd5e32edd436a81196c4ba6bae6e7 namespace=k8s.io May 13 05:47:35.706281 containerd[1460]: time="2025-05-13T05:47:35.706146036Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 13 05:47:35.710489 containerd[1460]: time="2025-05-13T05:47:35.710246797Z" level=info msg="StopContainer for \"73792df8ddd748df39b31831bb6f2c72dbd2b7fff68331b04d06e973be35ec65\" returns successfully" May 13 05:47:35.712161 containerd[1460]: time="2025-05-13T05:47:35.712127248Z" level=info msg="StopPodSandbox for \"a1fc3402e613818a9f4f4aca2dcd6712ac964a3dfeee7529db24465dd13c3c3e\"" May 13 05:47:35.712567 containerd[1460]: time="2025-05-13T05:47:35.712354926Z" level=info msg="Container to stop \"73792df8ddd748df39b31831bb6f2c72dbd2b7fff68331b04d06e973be35ec65\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 13 05:47:35.715495 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a1fc3402e613818a9f4f4aca2dcd6712ac964a3dfeee7529db24465dd13c3c3e-shm.mount: Deactivated successfully. 
May 13 05:47:35.726661 systemd[1]: cri-containerd-a1fc3402e613818a9f4f4aca2dcd6712ac964a3dfeee7529db24465dd13c3c3e.scope: Deactivated successfully. May 13 05:47:35.748247 kubelet[2568]: E0513 05:47:35.747428 2568 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 13 05:47:35.754035 containerd[1460]: time="2025-05-13T05:47:35.753864471Z" level=info msg="StopContainer for \"ea18c1fadb9829b4b91ace314862948cc12cd5e32edd436a81196c4ba6bae6e7\" returns successfully" May 13 05:47:35.758233 containerd[1460]: time="2025-05-13T05:47:35.754950170Z" level=info msg="StopPodSandbox for \"6d76c81694c5bbc2995d80d7590befbd258f6d03cac63f9bf6d22684c0869e0c\"" May 13 05:47:35.758233 containerd[1460]: time="2025-05-13T05:47:35.755018328Z" level=info msg="Container to stop \"3b74ec0d4ed9208a409f63503d680b3edc6cd5326d6562b6b7c13724af7100bd\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 13 05:47:35.758233 containerd[1460]: time="2025-05-13T05:47:35.755036282Z" level=info msg="Container to stop \"a48b59360524fff754e2690a0b477af74ef545fcb1fbb79769a137bc31b271ee\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 13 05:47:35.758233 containerd[1460]: time="2025-05-13T05:47:35.755048134Z" level=info msg="Container to stop \"370c6a34de998bb7b95368dd010a218e5f5e74b654b232fd0a97c8d9d029fa11\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 13 05:47:35.758233 containerd[1460]: time="2025-05-13T05:47:35.755066859Z" level=info msg="Container to stop \"ea18c1fadb9829b4b91ace314862948cc12cd5e32edd436a81196c4ba6bae6e7\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 13 05:47:35.758233 containerd[1460]: time="2025-05-13T05:47:35.755078711Z" level=info msg="Container to stop \"28f5dca97c91913d0a1a1b844859e2b764155fdaf07b9fbe9890fe4854b74a4c\" must be in 
running or unknown state, current state \"CONTAINER_EXITED\"" May 13 05:47:35.760072 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6d76c81694c5bbc2995d80d7590befbd258f6d03cac63f9bf6d22684c0869e0c-shm.mount: Deactivated successfully. May 13 05:47:35.774398 systemd[1]: cri-containerd-6d76c81694c5bbc2995d80d7590befbd258f6d03cac63f9bf6d22684c0869e0c.scope: Deactivated successfully. May 13 05:47:35.800034 containerd[1460]: time="2025-05-13T05:47:35.799735427Z" level=info msg="shim disconnected" id=a1fc3402e613818a9f4f4aca2dcd6712ac964a3dfeee7529db24465dd13c3c3e namespace=k8s.io May 13 05:47:35.800034 containerd[1460]: time="2025-05-13T05:47:35.799833130Z" level=warning msg="cleaning up after shim disconnected" id=a1fc3402e613818a9f4f4aca2dcd6712ac964a3dfeee7529db24465dd13c3c3e namespace=k8s.io May 13 05:47:35.800034 containerd[1460]: time="2025-05-13T05:47:35.799855832Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 13 05:47:35.803677 containerd[1460]: time="2025-05-13T05:47:35.803390129Z" level=info msg="shim disconnected" id=6d76c81694c5bbc2995d80d7590befbd258f6d03cac63f9bf6d22684c0869e0c namespace=k8s.io May 13 05:47:35.803677 containerd[1460]: time="2025-05-13T05:47:35.803471022Z" level=warning msg="cleaning up after shim disconnected" id=6d76c81694c5bbc2995d80d7590befbd258f6d03cac63f9bf6d22684c0869e0c namespace=k8s.io May 13 05:47:35.803677 containerd[1460]: time="2025-05-13T05:47:35.803483265Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 13 05:47:35.830453 containerd[1460]: time="2025-05-13T05:47:35.830392444Z" level=warning msg="cleanup warnings time=\"2025-05-13T05:47:35Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io May 13 05:47:35.832169 containerd[1460]: time="2025-05-13T05:47:35.831996636Z" level=info msg="TearDown network for sandbox 
\"a1fc3402e613818a9f4f4aca2dcd6712ac964a3dfeee7529db24465dd13c3c3e\" successfully" May 13 05:47:35.832169 containerd[1460]: time="2025-05-13T05:47:35.832022144Z" level=info msg="StopPodSandbox for \"a1fc3402e613818a9f4f4aca2dcd6712ac964a3dfeee7529db24465dd13c3c3e\" returns successfully" May 13 05:47:35.839636 containerd[1460]: time="2025-05-13T05:47:35.839475500Z" level=info msg="TearDown network for sandbox \"6d76c81694c5bbc2995d80d7590befbd258f6d03cac63f9bf6d22684c0869e0c\" successfully" May 13 05:47:35.839636 containerd[1460]: time="2025-05-13T05:47:35.839509303Z" level=info msg="StopPodSandbox for \"6d76c81694c5bbc2995d80d7590befbd258f6d03cac63f9bf6d22684c0869e0c\" returns successfully" May 13 05:47:35.972771 kubelet[2568]: I0513 05:47:35.972524 2568 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-twg8t\" (UniqueName: \"kubernetes.io/projected/83b69d26-ddc3-4213-a445-84588c734b1c-kube-api-access-twg8t\") pod \"83b69d26-ddc3-4213-a445-84588c734b1c\" (UID: \"83b69d26-ddc3-4213-a445-84588c734b1c\") " May 13 05:47:35.972771 kubelet[2568]: I0513 05:47:35.972632 2568 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/83b69d26-ddc3-4213-a445-84588c734b1c-clustermesh-secrets\") pod \"83b69d26-ddc3-4213-a445-84588c734b1c\" (UID: \"83b69d26-ddc3-4213-a445-84588c734b1c\") " May 13 05:47:35.972771 kubelet[2568]: I0513 05:47:35.972697 2568 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/83b69d26-ddc3-4213-a445-84588c734b1c-host-proc-sys-kernel\") pod \"83b69d26-ddc3-4213-a445-84588c734b1c\" (UID: \"83b69d26-ddc3-4213-a445-84588c734b1c\") " May 13 05:47:35.972771 kubelet[2568]: I0513 05:47:35.972750 2568 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-955v2\" (UniqueName: 
\"kubernetes.io/projected/02f784bc-84a6-4680-82ab-ed710da4d9c9-kube-api-access-955v2\") pod \"02f784bc-84a6-4680-82ab-ed710da4d9c9\" (UID: \"02f784bc-84a6-4680-82ab-ed710da4d9c9\") " May 13 05:47:35.973422 kubelet[2568]: I0513 05:47:35.972808 2568 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/83b69d26-ddc3-4213-a445-84588c734b1c-cilium-run\") pod \"83b69d26-ddc3-4213-a445-84588c734b1c\" (UID: \"83b69d26-ddc3-4213-a445-84588c734b1c\") " May 13 05:47:35.973422 kubelet[2568]: I0513 05:47:35.972866 2568 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/83b69d26-ddc3-4213-a445-84588c734b1c-bpf-maps\") pod \"83b69d26-ddc3-4213-a445-84588c734b1c\" (UID: \"83b69d26-ddc3-4213-a445-84588c734b1c\") " May 13 05:47:35.973422 kubelet[2568]: I0513 05:47:35.972902 2568 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/83b69d26-ddc3-4213-a445-84588c734b1c-xtables-lock\") pod \"83b69d26-ddc3-4213-a445-84588c734b1c\" (UID: \"83b69d26-ddc3-4213-a445-84588c734b1c\") " May 13 05:47:35.973422 kubelet[2568]: I0513 05:47:35.972955 2568 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/83b69d26-ddc3-4213-a445-84588c734b1c-hubble-tls\") pod \"83b69d26-ddc3-4213-a445-84588c734b1c\" (UID: \"83b69d26-ddc3-4213-a445-84588c734b1c\") " May 13 05:47:35.973422 kubelet[2568]: I0513 05:47:35.973020 2568 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/83b69d26-ddc3-4213-a445-84588c734b1c-cilium-config-path\") pod \"83b69d26-ddc3-4213-a445-84588c734b1c\" (UID: \"83b69d26-ddc3-4213-a445-84588c734b1c\") " May 13 05:47:35.973422 kubelet[2568]: I0513 05:47:35.973154 2568 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/83b69d26-ddc3-4213-a445-84588c734b1c-host-proc-sys-net\") pod \"83b69d26-ddc3-4213-a445-84588c734b1c\" (UID: \"83b69d26-ddc3-4213-a445-84588c734b1c\") " May 13 05:47:35.973975 kubelet[2568]: I0513 05:47:35.973316 2568 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/83b69d26-ddc3-4213-a445-84588c734b1c-etc-cni-netd\") pod \"83b69d26-ddc3-4213-a445-84588c734b1c\" (UID: \"83b69d26-ddc3-4213-a445-84588c734b1c\") " May 13 05:47:35.973975 kubelet[2568]: I0513 05:47:35.973400 2568 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/02f784bc-84a6-4680-82ab-ed710da4d9c9-cilium-config-path\") pod \"02f784bc-84a6-4680-82ab-ed710da4d9c9\" (UID: \"02f784bc-84a6-4680-82ab-ed710da4d9c9\") " May 13 05:47:35.973975 kubelet[2568]: I0513 05:47:35.973464 2568 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/83b69d26-ddc3-4213-a445-84588c734b1c-lib-modules\") pod \"83b69d26-ddc3-4213-a445-84588c734b1c\" (UID: \"83b69d26-ddc3-4213-a445-84588c734b1c\") " May 13 05:47:35.973975 kubelet[2568]: I0513 05:47:35.973521 2568 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/83b69d26-ddc3-4213-a445-84588c734b1c-cni-path\") pod \"83b69d26-ddc3-4213-a445-84588c734b1c\" (UID: \"83b69d26-ddc3-4213-a445-84588c734b1c\") " May 13 05:47:35.973975 kubelet[2568]: I0513 05:47:35.973568 2568 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/83b69d26-ddc3-4213-a445-84588c734b1c-hostproc\") pod \"83b69d26-ddc3-4213-a445-84588c734b1c\" (UID: 
\"83b69d26-ddc3-4213-a445-84588c734b1c\") " May 13 05:47:35.973975 kubelet[2568]: I0513 05:47:35.973618 2568 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/83b69d26-ddc3-4213-a445-84588c734b1c-cilium-cgroup\") pod \"83b69d26-ddc3-4213-a445-84588c734b1c\" (UID: \"83b69d26-ddc3-4213-a445-84588c734b1c\") " May 13 05:47:35.975674 kubelet[2568]: I0513 05:47:35.973912 2568 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/83b69d26-ddc3-4213-a445-84588c734b1c-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "83b69d26-ddc3-4213-a445-84588c734b1c" (UID: "83b69d26-ddc3-4213-a445-84588c734b1c"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 05:47:35.981306 kubelet[2568]: I0513 05:47:35.978355 2568 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/83b69d26-ddc3-4213-a445-84588c734b1c-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "83b69d26-ddc3-4213-a445-84588c734b1c" (UID: "83b69d26-ddc3-4213-a445-84588c734b1c"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 05:47:35.981306 kubelet[2568]: I0513 05:47:35.978467 2568 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/83b69d26-ddc3-4213-a445-84588c734b1c-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "83b69d26-ddc3-4213-a445-84588c734b1c" (UID: "83b69d26-ddc3-4213-a445-84588c734b1c"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 05:47:35.981306 kubelet[2568]: I0513 05:47:35.981004 2568 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/83b69d26-ddc3-4213-a445-84588c734b1c-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "83b69d26-ddc3-4213-a445-84588c734b1c" (UID: "83b69d26-ddc3-4213-a445-84588c734b1c"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 05:47:35.983669 kubelet[2568]: I0513 05:47:35.982447 2568 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/83b69d26-ddc3-4213-a445-84588c734b1c-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "83b69d26-ddc3-4213-a445-84588c734b1c" (UID: "83b69d26-ddc3-4213-a445-84588c734b1c"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 05:47:35.983669 kubelet[2568]: I0513 05:47:35.982537 2568 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/83b69d26-ddc3-4213-a445-84588c734b1c-cni-path" (OuterVolumeSpecName: "cni-path") pod "83b69d26-ddc3-4213-a445-84588c734b1c" (UID: "83b69d26-ddc3-4213-a445-84588c734b1c"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 05:47:35.983669 kubelet[2568]: I0513 05:47:35.982578 2568 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/83b69d26-ddc3-4213-a445-84588c734b1c-hostproc" (OuterVolumeSpecName: "hostproc") pod "83b69d26-ddc3-4213-a445-84588c734b1c" (UID: "83b69d26-ddc3-4213-a445-84588c734b1c"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 05:47:35.986599 kubelet[2568]: I0513 05:47:35.986491 2568 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/83b69d26-ddc3-4213-a445-84588c734b1c-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "83b69d26-ddc3-4213-a445-84588c734b1c" (UID: "83b69d26-ddc3-4213-a445-84588c734b1c"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 05:47:35.986789 kubelet[2568]: I0513 05:47:35.986631 2568 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/83b69d26-ddc3-4213-a445-84588c734b1c-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "83b69d26-ddc3-4213-a445-84588c734b1c" (UID: "83b69d26-ddc3-4213-a445-84588c734b1c"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 05:47:35.986789 kubelet[2568]: I0513 05:47:35.986711 2568 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/83b69d26-ddc3-4213-a445-84588c734b1c-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "83b69d26-ddc3-4213-a445-84588c734b1c" (UID: "83b69d26-ddc3-4213-a445-84588c734b1c"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 05:47:35.994543 kubelet[2568]: I0513 05:47:35.994451 2568 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83b69d26-ddc3-4213-a445-84588c734b1c-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "83b69d26-ddc3-4213-a445-84588c734b1c" (UID: "83b69d26-ddc3-4213-a445-84588c734b1c"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" May 13 05:47:35.997932 kubelet[2568]: I0513 05:47:35.997841 2568 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02f784bc-84a6-4680-82ab-ed710da4d9c9-kube-api-access-955v2" (OuterVolumeSpecName: "kube-api-access-955v2") pod "02f784bc-84a6-4680-82ab-ed710da4d9c9" (UID: "02f784bc-84a6-4680-82ab-ed710da4d9c9"). InnerVolumeSpecName "kube-api-access-955v2". PluginName "kubernetes.io/projected", VolumeGidValue "" May 13 05:47:36.004535 kubelet[2568]: I0513 05:47:36.004411 2568 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83b69d26-ddc3-4213-a445-84588c734b1c-kube-api-access-twg8t" (OuterVolumeSpecName: "kube-api-access-twg8t") pod "83b69d26-ddc3-4213-a445-84588c734b1c" (UID: "83b69d26-ddc3-4213-a445-84588c734b1c"). InnerVolumeSpecName "kube-api-access-twg8t". PluginName "kubernetes.io/projected", VolumeGidValue "" May 13 05:47:36.007449 kubelet[2568]: I0513 05:47:36.006429 2568 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83b69d26-ddc3-4213-a445-84588c734b1c-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "83b69d26-ddc3-4213-a445-84588c734b1c" (UID: "83b69d26-ddc3-4213-a445-84588c734b1c"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" May 13 05:47:36.008040 kubelet[2568]: I0513 05:47:36.007968 2568 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/83b69d26-ddc3-4213-a445-84588c734b1c-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "83b69d26-ddc3-4213-a445-84588c734b1c" (UID: "83b69d26-ddc3-4213-a445-84588c734b1c"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" May 13 05:47:36.011771 kubelet[2568]: I0513 05:47:36.011687 2568 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/02f784bc-84a6-4680-82ab-ed710da4d9c9-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "02f784bc-84a6-4680-82ab-ed710da4d9c9" (UID: "02f784bc-84a6-4680-82ab-ed710da4d9c9"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 13 05:47:36.075141 kubelet[2568]: I0513 05:47:36.074889 2568 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/83b69d26-ddc3-4213-a445-84588c734b1c-cilium-config-path\") on node \"ci-4081-3-3-n-f146884e63.novalocal\" DevicePath \"\"" May 13 05:47:36.075141 kubelet[2568]: I0513 05:47:36.074967 2568 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/83b69d26-ddc3-4213-a445-84588c734b1c-lib-modules\") on node \"ci-4081-3-3-n-f146884e63.novalocal\" DevicePath \"\"" May 13 05:47:36.075141 kubelet[2568]: I0513 05:47:36.074996 2568 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/83b69d26-ddc3-4213-a445-84588c734b1c-host-proc-sys-net\") on node \"ci-4081-3-3-n-f146884e63.novalocal\" DevicePath \"\"" May 13 05:47:36.075141 kubelet[2568]: I0513 05:47:36.075036 2568 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/83b69d26-ddc3-4213-a445-84588c734b1c-etc-cni-netd\") on node \"ci-4081-3-3-n-f146884e63.novalocal\" DevicePath \"\"" May 13 05:47:36.075141 kubelet[2568]: I0513 05:47:36.075065 2568 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/02f784bc-84a6-4680-82ab-ed710da4d9c9-cilium-config-path\") on node \"ci-4081-3-3-n-f146884e63.novalocal\" DevicePath \"\"" 
May 13 05:47:36.075141 kubelet[2568]: I0513 05:47:36.075089 2568 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/83b69d26-ddc3-4213-a445-84588c734b1c-cilium-cgroup\") on node \"ci-4081-3-3-n-f146884e63.novalocal\" DevicePath \"\"" May 13 05:47:36.075141 kubelet[2568]: I0513 05:47:36.075111 2568 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/83b69d26-ddc3-4213-a445-84588c734b1c-cni-path\") on node \"ci-4081-3-3-n-f146884e63.novalocal\" DevicePath \"\"" May 13 05:47:36.075880 kubelet[2568]: I0513 05:47:36.075193 2568 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/83b69d26-ddc3-4213-a445-84588c734b1c-hostproc\") on node \"ci-4081-3-3-n-f146884e63.novalocal\" DevicePath \"\"" May 13 05:47:36.075880 kubelet[2568]: I0513 05:47:36.075258 2568 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/83b69d26-ddc3-4213-a445-84588c734b1c-clustermesh-secrets\") on node \"ci-4081-3-3-n-f146884e63.novalocal\" DevicePath \"\"" May 13 05:47:36.075880 kubelet[2568]: I0513 05:47:36.075323 2568 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-twg8t\" (UniqueName: \"kubernetes.io/projected/83b69d26-ddc3-4213-a445-84588c734b1c-kube-api-access-twg8t\") on node \"ci-4081-3-3-n-f146884e63.novalocal\" DevicePath \"\"" May 13 05:47:36.075880 kubelet[2568]: I0513 05:47:36.075350 2568 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/83b69d26-ddc3-4213-a445-84588c734b1c-host-proc-sys-kernel\") on node \"ci-4081-3-3-n-f146884e63.novalocal\" DevicePath \"\"" May 13 05:47:36.075880 kubelet[2568]: I0513 05:47:36.075392 2568 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-955v2\" (UniqueName: 
\"kubernetes.io/projected/02f784bc-84a6-4680-82ab-ed710da4d9c9-kube-api-access-955v2\") on node \"ci-4081-3-3-n-f146884e63.novalocal\" DevicePath \"\"" May 13 05:47:36.075880 kubelet[2568]: I0513 05:47:36.075417 2568 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/83b69d26-ddc3-4213-a445-84588c734b1c-bpf-maps\") on node \"ci-4081-3-3-n-f146884e63.novalocal\" DevicePath \"\"" May 13 05:47:36.075880 kubelet[2568]: I0513 05:47:36.075441 2568 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/83b69d26-ddc3-4213-a445-84588c734b1c-xtables-lock\") on node \"ci-4081-3-3-n-f146884e63.novalocal\" DevicePath \"\"" May 13 05:47:36.076849 kubelet[2568]: I0513 05:47:36.075477 2568 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/83b69d26-ddc3-4213-a445-84588c734b1c-hubble-tls\") on node \"ci-4081-3-3-n-f146884e63.novalocal\" DevicePath \"\"" May 13 05:47:36.076849 kubelet[2568]: I0513 05:47:36.075501 2568 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/83b69d26-ddc3-4213-a445-84588c734b1c-cilium-run\") on node \"ci-4081-3-3-n-f146884e63.novalocal\" DevicePath \"\"" May 13 05:47:36.188762 kubelet[2568]: I0513 05:47:36.187298 2568 scope.go:117] "RemoveContainer" containerID="ea18c1fadb9829b4b91ace314862948cc12cd5e32edd436a81196c4ba6bae6e7" May 13 05:47:36.199260 containerd[1460]: time="2025-05-13T05:47:36.197184986Z" level=info msg="RemoveContainer for \"ea18c1fadb9829b4b91ace314862948cc12cd5e32edd436a81196c4ba6bae6e7\"" May 13 05:47:36.212064 systemd[1]: Removed slice kubepods-burstable-pod83b69d26_ddc3_4213_a445_84588c734b1c.slice - libcontainer container kubepods-burstable-pod83b69d26_ddc3_4213_a445_84588c734b1c.slice. May 13 05:47:36.212459 systemd[1]: kubepods-burstable-pod83b69d26_ddc3_4213_a445_84588c734b1c.slice: Consumed 11.525s CPU time. 
May 13 05:47:36.220781 systemd[1]: Removed slice kubepods-besteffort-pod02f784bc_84a6_4680_82ab_ed710da4d9c9.slice - libcontainer container kubepods-besteffort-pod02f784bc_84a6_4680_82ab_ed710da4d9c9.slice. May 13 05:47:36.221114 systemd[1]: kubepods-besteffort-pod02f784bc_84a6_4680_82ab_ed710da4d9c9.slice: Consumed 1.370s CPU time. May 13 05:47:36.269279 containerd[1460]: time="2025-05-13T05:47:36.269091511Z" level=info msg="RemoveContainer for \"ea18c1fadb9829b4b91ace314862948cc12cd5e32edd436a81196c4ba6bae6e7\" returns successfully" May 13 05:47:36.271873 kubelet[2568]: I0513 05:47:36.271816 2568 scope.go:117] "RemoveContainer" containerID="370c6a34de998bb7b95368dd010a218e5f5e74b654b232fd0a97c8d9d029fa11" May 13 05:47:36.278185 containerd[1460]: time="2025-05-13T05:47:36.277939686Z" level=info msg="RemoveContainer for \"370c6a34de998bb7b95368dd010a218e5f5e74b654b232fd0a97c8d9d029fa11\"" May 13 05:47:36.289614 containerd[1460]: time="2025-05-13T05:47:36.289249051Z" level=info msg="RemoveContainer for \"370c6a34de998bb7b95368dd010a218e5f5e74b654b232fd0a97c8d9d029fa11\" returns successfully" May 13 05:47:36.290963 kubelet[2568]: I0513 05:47:36.289804 2568 scope.go:117] "RemoveContainer" containerID="28f5dca97c91913d0a1a1b844859e2b764155fdaf07b9fbe9890fe4854b74a4c" May 13 05:47:36.294128 containerd[1460]: time="2025-05-13T05:47:36.294003509Z" level=info msg="RemoveContainer for \"28f5dca97c91913d0a1a1b844859e2b764155fdaf07b9fbe9890fe4854b74a4c\"" May 13 05:47:36.305721 containerd[1460]: time="2025-05-13T05:47:36.305606317Z" level=info msg="RemoveContainer for \"28f5dca97c91913d0a1a1b844859e2b764155fdaf07b9fbe9890fe4854b74a4c\" returns successfully" May 13 05:47:36.306246 kubelet[2568]: I0513 05:47:36.306067 2568 scope.go:117] "RemoveContainer" containerID="a48b59360524fff754e2690a0b477af74ef545fcb1fbb79769a137bc31b271ee" May 13 05:47:36.308880 containerd[1460]: time="2025-05-13T05:47:36.308704154Z" level=info msg="RemoveContainer for 
\"a48b59360524fff754e2690a0b477af74ef545fcb1fbb79769a137bc31b271ee\"" May 13 05:47:36.314591 containerd[1460]: time="2025-05-13T05:47:36.313454544Z" level=info msg="RemoveContainer for \"a48b59360524fff754e2690a0b477af74ef545fcb1fbb79769a137bc31b271ee\" returns successfully" May 13 05:47:36.314959 kubelet[2568]: I0513 05:47:36.313818 2568 scope.go:117] "RemoveContainer" containerID="3b74ec0d4ed9208a409f63503d680b3edc6cd5326d6562b6b7c13724af7100bd" May 13 05:47:36.316287 containerd[1460]: time="2025-05-13T05:47:36.316102156Z" level=info msg="RemoveContainer for \"3b74ec0d4ed9208a409f63503d680b3edc6cd5326d6562b6b7c13724af7100bd\"" May 13 05:47:36.322413 containerd[1460]: time="2025-05-13T05:47:36.322344468Z" level=info msg="RemoveContainer for \"3b74ec0d4ed9208a409f63503d680b3edc6cd5326d6562b6b7c13724af7100bd\" returns successfully" May 13 05:47:36.322808 kubelet[2568]: I0513 05:47:36.322783 2568 scope.go:117] "RemoveContainer" containerID="ea18c1fadb9829b4b91ace314862948cc12cd5e32edd436a81196c4ba6bae6e7" May 13 05:47:36.325805 containerd[1460]: time="2025-05-13T05:47:36.323610525Z" level=error msg="ContainerStatus for \"ea18c1fadb9829b4b91ace314862948cc12cd5e32edd436a81196c4ba6bae6e7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ea18c1fadb9829b4b91ace314862948cc12cd5e32edd436a81196c4ba6bae6e7\": not found" May 13 05:47:36.325916 kubelet[2568]: E0513 05:47:36.324329 2568 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ea18c1fadb9829b4b91ace314862948cc12cd5e32edd436a81196c4ba6bae6e7\": not found" containerID="ea18c1fadb9829b4b91ace314862948cc12cd5e32edd436a81196c4ba6bae6e7" May 13 05:47:36.325916 kubelet[2568]: I0513 05:47:36.324371 2568 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ea18c1fadb9829b4b91ace314862948cc12cd5e32edd436a81196c4ba6bae6e7"} err="failed to get 
container status \"ea18c1fadb9829b4b91ace314862948cc12cd5e32edd436a81196c4ba6bae6e7\": rpc error: code = NotFound desc = an error occurred when try to find container \"ea18c1fadb9829b4b91ace314862948cc12cd5e32edd436a81196c4ba6bae6e7\": not found" May 13 05:47:36.325916 kubelet[2568]: I0513 05:47:36.324517 2568 scope.go:117] "RemoveContainer" containerID="370c6a34de998bb7b95368dd010a218e5f5e74b654b232fd0a97c8d9d029fa11" May 13 05:47:36.326674 containerd[1460]: time="2025-05-13T05:47:36.326572837Z" level=error msg="ContainerStatus for \"370c6a34de998bb7b95368dd010a218e5f5e74b654b232fd0a97c8d9d029fa11\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"370c6a34de998bb7b95368dd010a218e5f5e74b654b232fd0a97c8d9d029fa11\": not found" May 13 05:47:36.326960 kubelet[2568]: E0513 05:47:36.326807 2568 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"370c6a34de998bb7b95368dd010a218e5f5e74b654b232fd0a97c8d9d029fa11\": not found" containerID="370c6a34de998bb7b95368dd010a218e5f5e74b654b232fd0a97c8d9d029fa11" May 13 05:47:36.326960 kubelet[2568]: I0513 05:47:36.326853 2568 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"370c6a34de998bb7b95368dd010a218e5f5e74b654b232fd0a97c8d9d029fa11"} err="failed to get container status \"370c6a34de998bb7b95368dd010a218e5f5e74b654b232fd0a97c8d9d029fa11\": rpc error: code = NotFound desc = an error occurred when try to find container \"370c6a34de998bb7b95368dd010a218e5f5e74b654b232fd0a97c8d9d029fa11\": not found" May 13 05:47:36.326960 kubelet[2568]: I0513 05:47:36.326883 2568 scope.go:117] "RemoveContainer" containerID="28f5dca97c91913d0a1a1b844859e2b764155fdaf07b9fbe9890fe4854b74a4c" May 13 05:47:36.329463 containerd[1460]: time="2025-05-13T05:47:36.329279479Z" level=error msg="ContainerStatus for 
\"28f5dca97c91913d0a1a1b844859e2b764155fdaf07b9fbe9890fe4854b74a4c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"28f5dca97c91913d0a1a1b844859e2b764155fdaf07b9fbe9890fe4854b74a4c\": not found" May 13 05:47:36.330026 kubelet[2568]: E0513 05:47:36.329863 2568 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"28f5dca97c91913d0a1a1b844859e2b764155fdaf07b9fbe9890fe4854b74a4c\": not found" containerID="28f5dca97c91913d0a1a1b844859e2b764155fdaf07b9fbe9890fe4854b74a4c" May 13 05:47:36.330026 kubelet[2568]: I0513 05:47:36.329917 2568 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"28f5dca97c91913d0a1a1b844859e2b764155fdaf07b9fbe9890fe4854b74a4c"} err="failed to get container status \"28f5dca97c91913d0a1a1b844859e2b764155fdaf07b9fbe9890fe4854b74a4c\": rpc error: code = NotFound desc = an error occurred when try to find container \"28f5dca97c91913d0a1a1b844859e2b764155fdaf07b9fbe9890fe4854b74a4c\": not found" May 13 05:47:36.330026 kubelet[2568]: I0513 05:47:36.329947 2568 scope.go:117] "RemoveContainer" containerID="a48b59360524fff754e2690a0b477af74ef545fcb1fbb79769a137bc31b271ee" May 13 05:47:36.330805 containerd[1460]: time="2025-05-13T05:47:36.330624506Z" level=error msg="ContainerStatus for \"a48b59360524fff754e2690a0b477af74ef545fcb1fbb79769a137bc31b271ee\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a48b59360524fff754e2690a0b477af74ef545fcb1fbb79769a137bc31b271ee\": not found" May 13 05:47:36.331241 kubelet[2568]: E0513 05:47:36.330989 2568 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a48b59360524fff754e2690a0b477af74ef545fcb1fbb79769a137bc31b271ee\": not found" 
containerID="a48b59360524fff754e2690a0b477af74ef545fcb1fbb79769a137bc31b271ee" May 13 05:47:36.331417 kubelet[2568]: I0513 05:47:36.331171 2568 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a48b59360524fff754e2690a0b477af74ef545fcb1fbb79769a137bc31b271ee"} err="failed to get container status \"a48b59360524fff754e2690a0b477af74ef545fcb1fbb79769a137bc31b271ee\": rpc error: code = NotFound desc = an error occurred when try to find container \"a48b59360524fff754e2690a0b477af74ef545fcb1fbb79769a137bc31b271ee\": not found" May 13 05:47:36.331417 kubelet[2568]: I0513 05:47:36.331334 2568 scope.go:117] "RemoveContainer" containerID="3b74ec0d4ed9208a409f63503d680b3edc6cd5326d6562b6b7c13724af7100bd" May 13 05:47:36.331671 containerd[1460]: time="2025-05-13T05:47:36.331580180Z" level=error msg="ContainerStatus for \"3b74ec0d4ed9208a409f63503d680b3edc6cd5326d6562b6b7c13724af7100bd\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3b74ec0d4ed9208a409f63503d680b3edc6cd5326d6562b6b7c13724af7100bd\": not found" May 13 05:47:36.331972 kubelet[2568]: E0513 05:47:36.331811 2568 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3b74ec0d4ed9208a409f63503d680b3edc6cd5326d6562b6b7c13724af7100bd\": not found" containerID="3b74ec0d4ed9208a409f63503d680b3edc6cd5326d6562b6b7c13724af7100bd" May 13 05:47:36.331972 kubelet[2568]: I0513 05:47:36.331844 2568 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3b74ec0d4ed9208a409f63503d680b3edc6cd5326d6562b6b7c13724af7100bd"} err="failed to get container status \"3b74ec0d4ed9208a409f63503d680b3edc6cd5326d6562b6b7c13724af7100bd\": rpc error: code = NotFound desc = an error occurred when try to find container \"3b74ec0d4ed9208a409f63503d680b3edc6cd5326d6562b6b7c13724af7100bd\": not found" May 13 
05:47:36.331972 kubelet[2568]: I0513 05:47:36.331869 2568 scope.go:117] "RemoveContainer" containerID="73792df8ddd748df39b31831bb6f2c72dbd2b7fff68331b04d06e973be35ec65" May 13 05:47:36.333325 containerd[1460]: time="2025-05-13T05:47:36.333269983Z" level=info msg="RemoveContainer for \"73792df8ddd748df39b31831bb6f2c72dbd2b7fff68331b04d06e973be35ec65\"" May 13 05:47:36.337570 containerd[1460]: time="2025-05-13T05:47:36.337428111Z" level=info msg="RemoveContainer for \"73792df8ddd748df39b31831bb6f2c72dbd2b7fff68331b04d06e973be35ec65\" returns successfully" May 13 05:47:36.337754 kubelet[2568]: I0513 05:47:36.337726 2568 scope.go:117] "RemoveContainer" containerID="73792df8ddd748df39b31831bb6f2c72dbd2b7fff68331b04d06e973be35ec65" May 13 05:47:36.338068 containerd[1460]: time="2025-05-13T05:47:36.338004944Z" level=error msg="ContainerStatus for \"73792df8ddd748df39b31831bb6f2c72dbd2b7fff68331b04d06e973be35ec65\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"73792df8ddd748df39b31831bb6f2c72dbd2b7fff68331b04d06e973be35ec65\": not found" May 13 05:47:36.338284 kubelet[2568]: E0513 05:47:36.338252 2568 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"73792df8ddd748df39b31831bb6f2c72dbd2b7fff68331b04d06e973be35ec65\": not found" containerID="73792df8ddd748df39b31831bb6f2c72dbd2b7fff68331b04d06e973be35ec65" May 13 05:47:36.338387 kubelet[2568]: I0513 05:47:36.338286 2568 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"73792df8ddd748df39b31831bb6f2c72dbd2b7fff68331b04d06e973be35ec65"} err="failed to get container status \"73792df8ddd748df39b31831bb6f2c72dbd2b7fff68331b04d06e973be35ec65\": rpc error: code = NotFound desc = an error occurred when try to find container \"73792df8ddd748df39b31831bb6f2c72dbd2b7fff68331b04d06e973be35ec65\": not found" May 13 05:47:36.513834 kubelet[2568]: 
I0513 05:47:36.513704 2568 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02f784bc-84a6-4680-82ab-ed710da4d9c9" path="/var/lib/kubelet/pods/02f784bc-84a6-4680-82ab-ed710da4d9c9/volumes" May 13 05:47:36.515918 kubelet[2568]: I0513 05:47:36.515802 2568 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83b69d26-ddc3-4213-a445-84588c734b1c" path="/var/lib/kubelet/pods/83b69d26-ddc3-4213-a445-84588c734b1c/volumes" May 13 05:47:36.535846 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a1fc3402e613818a9f4f4aca2dcd6712ac964a3dfeee7529db24465dd13c3c3e-rootfs.mount: Deactivated successfully. May 13 05:47:36.536109 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6d76c81694c5bbc2995d80d7590befbd258f6d03cac63f9bf6d22684c0869e0c-rootfs.mount: Deactivated successfully. May 13 05:47:36.536352 systemd[1]: var-lib-kubelet-pods-02f784bc\x2d84a6\x2d4680\x2d82ab\x2ded710da4d9c9-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d955v2.mount: Deactivated successfully. May 13 05:47:36.536532 systemd[1]: var-lib-kubelet-pods-83b69d26\x2dddc3\x2d4213\x2da445\x2d84588c734b1c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dtwg8t.mount: Deactivated successfully. May 13 05:47:36.536719 systemd[1]: var-lib-kubelet-pods-83b69d26\x2dddc3\x2d4213\x2da445\x2d84588c734b1c-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. May 13 05:47:36.536896 systemd[1]: var-lib-kubelet-pods-83b69d26\x2dddc3\x2d4213\x2da445\x2d84588c734b1c-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 13 05:47:37.519784 sshd[4167]: pam_unix(sshd:session): session closed for user core May 13 05:47:37.541402 systemd[1]: Started sshd@22-172.24.4.224:22-172.24.4.1:56240.service - OpenSSH per-connection server daemon (172.24.4.1:56240). May 13 05:47:37.549897 systemd[1]: sshd@21-172.24.4.224:22-172.24.4.1:35606.service: Deactivated successfully. 
May 13 05:47:37.572637 systemd[1]: session-24.scope: Deactivated successfully. May 13 05:47:37.573788 systemd[1]: session-24.scope: Consumed 1.486s CPU time. May 13 05:47:37.581246 systemd-logind[1441]: Session 24 logged out. Waiting for processes to exit. May 13 05:47:37.587666 systemd-logind[1441]: Removed session 24. May 13 05:47:38.583292 kubelet[2568]: I0513 05:47:38.579745 2568 setters.go:600] "Node became not ready" node="ci-4081-3-3-n-f146884e63.novalocal" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-13T05:47:38Z","lastTransitionTime":"2025-05-13T05:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} May 13 05:47:39.011303 sshd[4329]: Accepted publickey for core from 172.24.4.1 port 56240 ssh2: RSA SHA256:+E8Eq1uMTdkzoHHj4Cx4DdKLDlGHP/AhkvM7vBSKHyU May 13 05:47:39.016821 sshd[4329]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 05:47:39.035151 systemd-logind[1441]: New session 25 of user core. May 13 05:47:39.041722 systemd[1]: Started session-25.scope - Session 25 of User core. 
May 13 05:47:40.752217 kubelet[2568]: E0513 05:47:40.751974 2568 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 13 05:47:40.901036 kubelet[2568]: E0513 05:47:40.900921 2568 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="83b69d26-ddc3-4213-a445-84588c734b1c" containerName="mount-cgroup" May 13 05:47:40.901036 kubelet[2568]: E0513 05:47:40.901008 2568 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="83b69d26-ddc3-4213-a445-84588c734b1c" containerName="apply-sysctl-overwrites" May 13 05:47:40.901036 kubelet[2568]: E0513 05:47:40.901018 2568 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="83b69d26-ddc3-4213-a445-84588c734b1c" containerName="mount-bpf-fs" May 13 05:47:40.901036 kubelet[2568]: E0513 05:47:40.901030 2568 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="02f784bc-84a6-4680-82ab-ed710da4d9c9" containerName="cilium-operator" May 13 05:47:40.902499 kubelet[2568]: E0513 05:47:40.902005 2568 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="83b69d26-ddc3-4213-a445-84588c734b1c" containerName="clean-cilium-state" May 13 05:47:40.902499 kubelet[2568]: E0513 05:47:40.902059 2568 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="83b69d26-ddc3-4213-a445-84588c734b1c" containerName="cilium-agent" May 13 05:47:40.902499 kubelet[2568]: I0513 05:47:40.902166 2568 memory_manager.go:354] "RemoveStaleState removing state" podUID="83b69d26-ddc3-4213-a445-84588c734b1c" containerName="cilium-agent" May 13 05:47:40.902499 kubelet[2568]: I0513 05:47:40.902181 2568 memory_manager.go:354] "RemoveStaleState removing state" podUID="02f784bc-84a6-4680-82ab-ed710da4d9c9" containerName="cilium-operator" May 13 05:47:40.919122 kubelet[2568]: W0513 05:47:40.918911 2568 reflector.go:561] object-"kube-system"/"hubble-server-certs": failed to 
list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ci-4081-3-3-n-f146884e63.novalocal" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4081-3-3-n-f146884e63.novalocal' and this object May 13 05:47:40.919122 kubelet[2568]: E0513 05:47:40.919056 2568 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"hubble-server-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"hubble-server-certs\" is forbidden: User \"system:node:ci-4081-3-3-n-f146884e63.novalocal\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4081-3-3-n-f146884e63.novalocal' and this object" logger="UnhandledError" May 13 05:47:40.923260 kubelet[2568]: W0513 05:47:40.922707 2568 reflector.go:561] object-"kube-system"/"cilium-ipsec-keys": failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:ci-4081-3-3-n-f146884e63.novalocal" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4081-3-3-n-f146884e63.novalocal' and this object May 13 05:47:40.923260 kubelet[2568]: E0513 05:47:40.922774 2568 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"cilium-ipsec-keys\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-ipsec-keys\" is forbidden: User \"system:node:ci-4081-3-3-n-f146884e63.novalocal\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4081-3-3-n-f146884e63.novalocal' and this object" logger="UnhandledError" May 13 05:47:40.923000 systemd[1]: Created slice kubepods-burstable-pod92ac8f51_1401_4e02_883e_74603505bf2c.slice - libcontainer container kubepods-burstable-pod92ac8f51_1401_4e02_883e_74603505bf2c.slice. 
May 13 05:47:40.967698 sshd[4329]: pam_unix(sshd:session): session closed for user core May 13 05:47:40.983363 systemd[1]: sshd@22-172.24.4.224:22-172.24.4.1:56240.service: Deactivated successfully. May 13 05:47:40.987912 systemd[1]: session-25.scope: Deactivated successfully. May 13 05:47:40.988402 systemd[1]: session-25.scope: Consumed 1.337s CPU time. May 13 05:47:40.992675 systemd-logind[1441]: Session 25 logged out. Waiting for processes to exit. May 13 05:47:41.001728 systemd[1]: Started sshd@23-172.24.4.224:22-172.24.4.1:56256.service - OpenSSH per-connection server daemon (172.24.4.1:56256). May 13 05:47:41.007579 systemd-logind[1441]: Removed session 25. May 13 05:47:41.016869 kubelet[2568]: I0513 05:47:41.016806 2568 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/92ac8f51-1401-4e02-883e-74603505bf2c-cilium-config-path\") pod \"cilium-zvkgs\" (UID: \"92ac8f51-1401-4e02-883e-74603505bf2c\") " pod="kube-system/cilium-zvkgs" May 13 05:47:41.016992 kubelet[2568]: I0513 05:47:41.016886 2568 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/92ac8f51-1401-4e02-883e-74603505bf2c-bpf-maps\") pod \"cilium-zvkgs\" (UID: \"92ac8f51-1401-4e02-883e-74603505bf2c\") " pod="kube-system/cilium-zvkgs" May 13 05:47:41.016992 kubelet[2568]: I0513 05:47:41.016916 2568 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/92ac8f51-1401-4e02-883e-74603505bf2c-hostproc\") pod \"cilium-zvkgs\" (UID: \"92ac8f51-1401-4e02-883e-74603505bf2c\") " pod="kube-system/cilium-zvkgs" May 13 05:47:41.016992 kubelet[2568]: I0513 05:47:41.016937 2568 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/92ac8f51-1401-4e02-883e-74603505bf2c-xtables-lock\") pod \"cilium-zvkgs\" (UID: \"92ac8f51-1401-4e02-883e-74603505bf2c\") " pod="kube-system/cilium-zvkgs" May 13 05:47:41.016992 kubelet[2568]: I0513 05:47:41.016956 2568 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/92ac8f51-1401-4e02-883e-74603505bf2c-host-proc-sys-net\") pod \"cilium-zvkgs\" (UID: \"92ac8f51-1401-4e02-883e-74603505bf2c\") " pod="kube-system/cilium-zvkgs" May 13 05:47:41.017230 kubelet[2568]: I0513 05:47:41.016994 2568 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/92ac8f51-1401-4e02-883e-74603505bf2c-host-proc-sys-kernel\") pod \"cilium-zvkgs\" (UID: \"92ac8f51-1401-4e02-883e-74603505bf2c\") " pod="kube-system/cilium-zvkgs" May 13 05:47:41.017230 kubelet[2568]: I0513 05:47:41.017016 2568 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/92ac8f51-1401-4e02-883e-74603505bf2c-hubble-tls\") pod \"cilium-zvkgs\" (UID: \"92ac8f51-1401-4e02-883e-74603505bf2c\") " pod="kube-system/cilium-zvkgs" May 13 05:47:41.017230 kubelet[2568]: I0513 05:47:41.017036 2568 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xg5gn\" (UniqueName: \"kubernetes.io/projected/92ac8f51-1401-4e02-883e-74603505bf2c-kube-api-access-xg5gn\") pod \"cilium-zvkgs\" (UID: \"92ac8f51-1401-4e02-883e-74603505bf2c\") " pod="kube-system/cilium-zvkgs" May 13 05:47:41.017230 kubelet[2568]: I0513 05:47:41.017081 2568 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/92ac8f51-1401-4e02-883e-74603505bf2c-etc-cni-netd\") pod \"cilium-zvkgs\" 
(UID: \"92ac8f51-1401-4e02-883e-74603505bf2c\") " pod="kube-system/cilium-zvkgs" May 13 05:47:41.017230 kubelet[2568]: I0513 05:47:41.017101 2568 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/92ac8f51-1401-4e02-883e-74603505bf2c-lib-modules\") pod \"cilium-zvkgs\" (UID: \"92ac8f51-1401-4e02-883e-74603505bf2c\") " pod="kube-system/cilium-zvkgs" May 13 05:47:41.017230 kubelet[2568]: I0513 05:47:41.017121 2568 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/92ac8f51-1401-4e02-883e-74603505bf2c-cilium-cgroup\") pod \"cilium-zvkgs\" (UID: \"92ac8f51-1401-4e02-883e-74603505bf2c\") " pod="kube-system/cilium-zvkgs" May 13 05:47:41.017546 kubelet[2568]: I0513 05:47:41.017153 2568 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/92ac8f51-1401-4e02-883e-74603505bf2c-cni-path\") pod \"cilium-zvkgs\" (UID: \"92ac8f51-1401-4e02-883e-74603505bf2c\") " pod="kube-system/cilium-zvkgs" May 13 05:47:41.017546 kubelet[2568]: I0513 05:47:41.017185 2568 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/92ac8f51-1401-4e02-883e-74603505bf2c-clustermesh-secrets\") pod \"cilium-zvkgs\" (UID: \"92ac8f51-1401-4e02-883e-74603505bf2c\") " pod="kube-system/cilium-zvkgs" May 13 05:47:41.017546 kubelet[2568]: I0513 05:47:41.017238 2568 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/92ac8f51-1401-4e02-883e-74603505bf2c-cilium-ipsec-secrets\") pod \"cilium-zvkgs\" (UID: \"92ac8f51-1401-4e02-883e-74603505bf2c\") " pod="kube-system/cilium-zvkgs" May 13 05:47:41.017546 kubelet[2568]: I0513 
05:47:41.017270 2568 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/92ac8f51-1401-4e02-883e-74603505bf2c-cilium-run\") pod \"cilium-zvkgs\" (UID: \"92ac8f51-1401-4e02-883e-74603505bf2c\") " pod="kube-system/cilium-zvkgs" May 13 05:47:42.121129 kubelet[2568]: E0513 05:47:42.120951 2568 projected.go:263] Couldn't get secret kube-system/hubble-server-certs: failed to sync secret cache: timed out waiting for the condition May 13 05:47:42.121129 kubelet[2568]: E0513 05:47:42.121120 2568 projected.go:194] Error preparing data for projected volume hubble-tls for pod kube-system/cilium-zvkgs: failed to sync secret cache: timed out waiting for the condition May 13 05:47:42.122768 kubelet[2568]: E0513 05:47:42.121512 2568 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/92ac8f51-1401-4e02-883e-74603505bf2c-hubble-tls podName:92ac8f51-1401-4e02-883e-74603505bf2c nodeName:}" failed. No retries permitted until 2025-05-13 05:47:42.621395158 +0000 UTC m=+372.248670701 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "hubble-tls" (UniqueName: "kubernetes.io/projected/92ac8f51-1401-4e02-883e-74603505bf2c-hubble-tls") pod "cilium-zvkgs" (UID: "92ac8f51-1401-4e02-883e-74603505bf2c") : failed to sync secret cache: timed out waiting for the condition May 13 05:47:42.122768 kubelet[2568]: E0513 05:47:42.121593 2568 secret.go:188] Couldn't get secret kube-system/cilium-ipsec-keys: failed to sync secret cache: timed out waiting for the condition May 13 05:47:42.122768 kubelet[2568]: E0513 05:47:42.121743 2568 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/92ac8f51-1401-4e02-883e-74603505bf2c-cilium-ipsec-secrets podName:92ac8f51-1401-4e02-883e-74603505bf2c nodeName:}" failed. No retries permitted until 2025-05-13 05:47:42.621713987 +0000 UTC m=+372.248989530 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cilium-ipsec-secrets" (UniqueName: "kubernetes.io/secret/92ac8f51-1401-4e02-883e-74603505bf2c-cilium-ipsec-secrets") pod "cilium-zvkgs" (UID: "92ac8f51-1401-4e02-883e-74603505bf2c") : failed to sync secret cache: timed out waiting for the condition May 13 05:47:42.133138 sshd[4343]: Accepted publickey for core from 172.24.4.1 port 56256 ssh2: RSA SHA256:+E8Eq1uMTdkzoHHj4Cx4DdKLDlGHP/AhkvM7vBSKHyU May 13 05:47:42.149848 sshd[4343]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 05:47:42.163508 systemd-logind[1441]: New session 26 of user core. May 13 05:47:42.179636 systemd[1]: Started session-26.scope - Session 26 of User core. May 13 05:47:42.731554 containerd[1460]: time="2025-05-13T05:47:42.731287266Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zvkgs,Uid:92ac8f51-1401-4e02-883e-74603505bf2c,Namespace:kube-system,Attempt:0,}" May 13 05:47:42.793579 sshd[4343]: pam_unix(sshd:session): session closed for user core May 13 05:47:42.809500 systemd[1]: sshd@23-172.24.4.224:22-172.24.4.1:56256.service: Deactivated successfully. May 13 05:47:42.814046 systemd[1]: session-26.scope: Deactivated successfully. May 13 05:47:42.821479 systemd-logind[1441]: Session 26 logged out. Waiting for processes to exit. May 13 05:47:42.831297 systemd[1]: Started sshd@24-172.24.4.224:22-172.24.4.1:56260.service - OpenSSH per-connection server daemon (172.24.4.1:56260). May 13 05:47:42.834587 systemd-logind[1441]: Removed session 26. May 13 05:47:42.840623 containerd[1460]: time="2025-05-13T05:47:42.840439173Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 05:47:42.843282 containerd[1460]: time="2025-05-13T05:47:42.840583053Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 05:47:42.843282 containerd[1460]: time="2025-05-13T05:47:42.841989163Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 05:47:42.843282 containerd[1460]: time="2025-05-13T05:47:42.842118477Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 05:47:42.874523 systemd[1]: Started cri-containerd-45f4646ec0981815a332b8d5353156b0186ae7c613b11a9b3d490841651be99f.scope - libcontainer container 45f4646ec0981815a332b8d5353156b0186ae7c613b11a9b3d490841651be99f. May 13 05:47:42.905901 containerd[1460]: time="2025-05-13T05:47:42.905626336Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zvkgs,Uid:92ac8f51-1401-4e02-883e-74603505bf2c,Namespace:kube-system,Attempt:0,} returns sandbox id \"45f4646ec0981815a332b8d5353156b0186ae7c613b11a9b3d490841651be99f\"" May 13 05:47:42.912593 containerd[1460]: time="2025-05-13T05:47:42.912540800Z" level=info msg="CreateContainer within sandbox \"45f4646ec0981815a332b8d5353156b0186ae7c613b11a9b3d490841651be99f\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 13 05:47:42.936642 containerd[1460]: time="2025-05-13T05:47:42.936458170Z" level=info msg="CreateContainer within sandbox \"45f4646ec0981815a332b8d5353156b0186ae7c613b11a9b3d490841651be99f\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"e8a4b5d506a7feee672724ac6ab61cf568e849d16a87fcec36b0d94b7c7f86ac\"" May 13 05:47:42.940460 containerd[1460]: time="2025-05-13T05:47:42.939124146Z" level=info msg="StartContainer for \"e8a4b5d506a7feee672724ac6ab61cf568e849d16a87fcec36b0d94b7c7f86ac\"" May 13 05:47:42.972478 systemd[1]: Started cri-containerd-e8a4b5d506a7feee672724ac6ab61cf568e849d16a87fcec36b0d94b7c7f86ac.scope - libcontainer container 
e8a4b5d506a7feee672724ac6ab61cf568e849d16a87fcec36b0d94b7c7f86ac. May 13 05:47:43.008367 containerd[1460]: time="2025-05-13T05:47:43.008178721Z" level=info msg="StartContainer for \"e8a4b5d506a7feee672724ac6ab61cf568e849d16a87fcec36b0d94b7c7f86ac\" returns successfully" May 13 05:47:43.029301 systemd[1]: cri-containerd-e8a4b5d506a7feee672724ac6ab61cf568e849d16a87fcec36b0d94b7c7f86ac.scope: Deactivated successfully. May 13 05:47:43.075025 containerd[1460]: time="2025-05-13T05:47:43.074711020Z" level=info msg="shim disconnected" id=e8a4b5d506a7feee672724ac6ab61cf568e849d16a87fcec36b0d94b7c7f86ac namespace=k8s.io May 13 05:47:43.075025 containerd[1460]: time="2025-05-13T05:47:43.074848688Z" level=warning msg="cleaning up after shim disconnected" id=e8a4b5d506a7feee672724ac6ab61cf568e849d16a87fcec36b0d94b7c7f86ac namespace=k8s.io May 13 05:47:43.075025 containerd[1460]: time="2025-05-13T05:47:43.074864818Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 13 05:47:43.271567 containerd[1460]: time="2025-05-13T05:47:43.270370132Z" level=info msg="CreateContainer within sandbox \"45f4646ec0981815a332b8d5353156b0186ae7c613b11a9b3d490841651be99f\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 13 05:47:43.301380 containerd[1460]: time="2025-05-13T05:47:43.301036324Z" level=info msg="CreateContainer within sandbox \"45f4646ec0981815a332b8d5353156b0186ae7c613b11a9b3d490841651be99f\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"456a868cecbcc70dc43a5bfcb9ddd83783dcfb1d7889a9c1f5245f0b9285a532\"" May 13 05:47:43.305629 containerd[1460]: time="2025-05-13T05:47:43.305538147Z" level=info msg="StartContainer for \"456a868cecbcc70dc43a5bfcb9ddd83783dcfb1d7889a9c1f5245f0b9285a532\"" May 13 05:47:43.364435 systemd[1]: Started cri-containerd-456a868cecbcc70dc43a5bfcb9ddd83783dcfb1d7889a9c1f5245f0b9285a532.scope - libcontainer container 456a868cecbcc70dc43a5bfcb9ddd83783dcfb1d7889a9c1f5245f0b9285a532. 
May 13 05:47:43.398404 containerd[1460]: time="2025-05-13T05:47:43.398042244Z" level=info msg="StartContainer for \"456a868cecbcc70dc43a5bfcb9ddd83783dcfb1d7889a9c1f5245f0b9285a532\" returns successfully" May 13 05:47:43.404453 systemd[1]: cri-containerd-456a868cecbcc70dc43a5bfcb9ddd83783dcfb1d7889a9c1f5245f0b9285a532.scope: Deactivated successfully. May 13 05:47:43.436245 containerd[1460]: time="2025-05-13T05:47:43.436154267Z" level=info msg="shim disconnected" id=456a868cecbcc70dc43a5bfcb9ddd83783dcfb1d7889a9c1f5245f0b9285a532 namespace=k8s.io May 13 05:47:43.436715 containerd[1460]: time="2025-05-13T05:47:43.436544780Z" level=warning msg="cleaning up after shim disconnected" id=456a868cecbcc70dc43a5bfcb9ddd83783dcfb1d7889a9c1f5245f0b9285a532 namespace=k8s.io May 13 05:47:43.436715 containerd[1460]: time="2025-05-13T05:47:43.436582191Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 13 05:47:43.457182 containerd[1460]: time="2025-05-13T05:47:43.456681842Z" level=warning msg="cleanup warnings time=\"2025-05-13T05:47:43Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io May 13 05:47:44.133509 sshd[4363]: Accepted publickey for core from 172.24.4.1 port 56260 ssh2: RSA SHA256:+E8Eq1uMTdkzoHHj4Cx4DdKLDlGHP/AhkvM7vBSKHyU May 13 05:47:44.137688 sshd[4363]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 05:47:44.148636 systemd-logind[1441]: New session 27 of user core. May 13 05:47:44.157590 systemd[1]: Started session-27.scope - Session 27 of User core. 
May 13 05:47:44.286772 containerd[1460]: time="2025-05-13T05:47:44.286096223Z" level=info msg="CreateContainer within sandbox \"45f4646ec0981815a332b8d5353156b0186ae7c613b11a9b3d490841651be99f\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 13 05:47:44.351796 containerd[1460]: time="2025-05-13T05:47:44.351733110Z" level=info msg="CreateContainer within sandbox \"45f4646ec0981815a332b8d5353156b0186ae7c613b11a9b3d490841651be99f\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"b4a87992f0f150bde74789764115bed96d35349659c54fff622138f627e37ca1\"" May 13 05:47:44.353367 containerd[1460]: time="2025-05-13T05:47:44.353326653Z" level=info msg="StartContainer for \"b4a87992f0f150bde74789764115bed96d35349659c54fff622138f627e37ca1\"" May 13 05:47:44.414451 systemd[1]: Started cri-containerd-b4a87992f0f150bde74789764115bed96d35349659c54fff622138f627e37ca1.scope - libcontainer container b4a87992f0f150bde74789764115bed96d35349659c54fff622138f627e37ca1. May 13 05:47:44.459611 containerd[1460]: time="2025-05-13T05:47:44.459547214Z" level=info msg="StartContainer for \"b4a87992f0f150bde74789764115bed96d35349659c54fff622138f627e37ca1\" returns successfully" May 13 05:47:44.493425 systemd[1]: cri-containerd-b4a87992f0f150bde74789764115bed96d35349659c54fff622138f627e37ca1.scope: Deactivated successfully. 
May 13 05:47:44.529824 containerd[1460]: time="2025-05-13T05:47:44.529732140Z" level=info msg="shim disconnected" id=b4a87992f0f150bde74789764115bed96d35349659c54fff622138f627e37ca1 namespace=k8s.io May 13 05:47:44.529824 containerd[1460]: time="2025-05-13T05:47:44.529796832Z" level=warning msg="cleaning up after shim disconnected" id=b4a87992f0f150bde74789764115bed96d35349659c54fff622138f627e37ca1 namespace=k8s.io May 13 05:47:44.529824 containerd[1460]: time="2025-05-13T05:47:44.529807031Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 13 05:47:44.644393 systemd[1]: run-containerd-runc-k8s.io-b4a87992f0f150bde74789764115bed96d35349659c54fff622138f627e37ca1-runc.iKiM54.mount: Deactivated successfully. May 13 05:47:44.644792 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b4a87992f0f150bde74789764115bed96d35349659c54fff622138f627e37ca1-rootfs.mount: Deactivated successfully. May 13 05:47:45.301006 containerd[1460]: time="2025-05-13T05:47:45.300654755Z" level=info msg="CreateContainer within sandbox \"45f4646ec0981815a332b8d5353156b0186ae7c613b11a9b3d490841651be99f\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 13 05:47:45.340420 containerd[1460]: time="2025-05-13T05:47:45.339001199Z" level=info msg="CreateContainer within sandbox \"45f4646ec0981815a332b8d5353156b0186ae7c613b11a9b3d490841651be99f\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"68bbabb5aa810620d8eef3ca022995023cade33044fcd981f2ecf20d27c3f60e\"" May 13 05:47:45.339754 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2119879174.mount: Deactivated successfully. 
May 13 05:47:45.343151 containerd[1460]: time="2025-05-13T05:47:45.342641685Z" level=info msg="StartContainer for \"68bbabb5aa810620d8eef3ca022995023cade33044fcd981f2ecf20d27c3f60e\"" May 13 05:47:45.404505 systemd[1]: Started cri-containerd-68bbabb5aa810620d8eef3ca022995023cade33044fcd981f2ecf20d27c3f60e.scope - libcontainer container 68bbabb5aa810620d8eef3ca022995023cade33044fcd981f2ecf20d27c3f60e. May 13 05:47:45.435749 systemd[1]: cri-containerd-68bbabb5aa810620d8eef3ca022995023cade33044fcd981f2ecf20d27c3f60e.scope: Deactivated successfully. May 13 05:47:45.441339 containerd[1460]: time="2025-05-13T05:47:45.441068811Z" level=info msg="StartContainer for \"68bbabb5aa810620d8eef3ca022995023cade33044fcd981f2ecf20d27c3f60e\" returns successfully" May 13 05:47:45.473996 containerd[1460]: time="2025-05-13T05:47:45.473686417Z" level=info msg="shim disconnected" id=68bbabb5aa810620d8eef3ca022995023cade33044fcd981f2ecf20d27c3f60e namespace=k8s.io May 13 05:47:45.473996 containerd[1460]: time="2025-05-13T05:47:45.473756018Z" level=warning msg="cleaning up after shim disconnected" id=68bbabb5aa810620d8eef3ca022995023cade33044fcd981f2ecf20d27c3f60e namespace=k8s.io May 13 05:47:45.473996 containerd[1460]: time="2025-05-13T05:47:45.473769554Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 13 05:47:45.644913 systemd[1]: run-containerd-runc-k8s.io-68bbabb5aa810620d8eef3ca022995023cade33044fcd981f2ecf20d27c3f60e-runc.s73pML.mount: Deactivated successfully. May 13 05:47:45.645638 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-68bbabb5aa810620d8eef3ca022995023cade33044fcd981f2ecf20d27c3f60e-rootfs.mount: Deactivated successfully. 
May 13 05:47:45.755900 kubelet[2568]: E0513 05:47:45.755780 2568 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 13 05:47:46.338345 containerd[1460]: time="2025-05-13T05:47:46.335790318Z" level=info msg="CreateContainer within sandbox \"45f4646ec0981815a332b8d5353156b0186ae7c613b11a9b3d490841651be99f\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 13 05:47:46.376895 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1560976530.mount: Deactivated successfully. May 13 05:47:46.384520 containerd[1460]: time="2025-05-13T05:47:46.383964394Z" level=info msg="CreateContainer within sandbox \"45f4646ec0981815a332b8d5353156b0186ae7c613b11a9b3d490841651be99f\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"5246d8580e131a8fab0757def2e0519d8c780dc0cc148f9ded1d761d8d625a05\"" May 13 05:47:46.390389 containerd[1460]: time="2025-05-13T05:47:46.387739612Z" level=info msg="StartContainer for \"5246d8580e131a8fab0757def2e0519d8c780dc0cc148f9ded1d761d8d625a05\"" May 13 05:47:46.443494 systemd[1]: Started cri-containerd-5246d8580e131a8fab0757def2e0519d8c780dc0cc148f9ded1d761d8d625a05.scope - libcontainer container 5246d8580e131a8fab0757def2e0519d8c780dc0cc148f9ded1d761d8d625a05. May 13 05:47:46.492002 containerd[1460]: time="2025-05-13T05:47:46.491869116Z" level=info msg="StartContainer for \"5246d8580e131a8fab0757def2e0519d8c780dc0cc148f9ded1d761d8d625a05\" returns successfully" May 13 05:47:46.645880 systemd[1]: run-containerd-runc-k8s.io-5246d8580e131a8fab0757def2e0519d8c780dc0cc148f9ded1d761d8d625a05-runc.drJlTF.mount: Deactivated successfully. 
May 13 05:47:47.084488 kernel: cryptd: max_cpu_qlen set to 1000 May 13 05:47:47.151417 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm_base(ctr(aes-generic),ghash-generic)))) May 13 05:47:47.399665 kubelet[2568]: I0513 05:47:47.399460 2568 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-zvkgs" podStartSLOduration=7.399270514 podStartE2EDuration="7.399270514s" podCreationTimestamp="2025-05-13 05:47:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 05:47:47.397966707 +0000 UTC m=+377.025242200" watchObservedRunningTime="2025-05-13 05:47:47.399270514 +0000 UTC m=+377.026546007" May 13 05:47:50.744701 systemd-networkd[1363]: lxc_health: Link UP May 13 05:47:50.770382 systemd-networkd[1363]: lxc_health: Gained carrier May 13 05:47:51.512415 systemd[1]: run-containerd-runc-k8s.io-5246d8580e131a8fab0757def2e0519d8c780dc0cc148f9ded1d761d8d625a05-runc.aMrCnT.mount: Deactivated successfully. May 13 05:47:51.886751 systemd-networkd[1363]: lxc_health: Gained IPv6LL May 13 05:47:53.801164 systemd[1]: run-containerd-runc-k8s.io-5246d8580e131a8fab0757def2e0519d8c780dc0cc148f9ded1d761d8d625a05-runc.XEt7kk.mount: Deactivated successfully. May 13 05:47:53.878931 kubelet[2568]: E0513 05:47:53.878801 2568 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:36988->127.0.0.1:44217: write tcp 127.0.0.1:36988->127.0.0.1:44217: write: connection reset by peer May 13 05:47:56.024498 systemd[1]: run-containerd-runc-k8s.io-5246d8580e131a8fab0757def2e0519d8c780dc0cc148f9ded1d761d8d625a05-runc.L2KdFg.mount: Deactivated successfully. May 13 05:47:56.425985 sshd[4363]: pam_unix(sshd:session): session closed for user core May 13 05:47:56.433681 systemd-logind[1441]: Session 27 logged out. Waiting for processes to exit. 
May 13 05:47:56.434575 systemd[1]: sshd@24-172.24.4.224:22-172.24.4.1:56260.service: Deactivated successfully. May 13 05:47:56.439519 systemd[1]: session-27.scope: Deactivated successfully. May 13 05:47:56.444298 systemd-logind[1441]: Removed session 27. May 13 05:48:30.568895 containerd[1460]: time="2025-05-13T05:48:30.568535559Z" level=info msg="StopPodSandbox for \"a1fc3402e613818a9f4f4aca2dcd6712ac964a3dfeee7529db24465dd13c3c3e\"" May 13 05:48:30.568895 containerd[1460]: time="2025-05-13T05:48:30.582228427Z" level=info msg="TearDown network for sandbox \"a1fc3402e613818a9f4f4aca2dcd6712ac964a3dfeee7529db24465dd13c3c3e\" successfully" May 13 05:48:30.568895 containerd[1460]: time="2025-05-13T05:48:30.582766507Z" level=info msg="StopPodSandbox for \"a1fc3402e613818a9f4f4aca2dcd6712ac964a3dfeee7529db24465dd13c3c3e\" returns successfully" May 13 05:48:30.603030 containerd[1460]: time="2025-05-13T05:48:30.590278671Z" level=info msg="RemovePodSandbox for \"a1fc3402e613818a9f4f4aca2dcd6712ac964a3dfeee7529db24465dd13c3c3e\"" May 13 05:48:30.603030 containerd[1460]: time="2025-05-13T05:48:30.590453770Z" level=info msg="Forcibly stopping sandbox \"a1fc3402e613818a9f4f4aca2dcd6712ac964a3dfeee7529db24465dd13c3c3e\"" May 13 05:48:30.603030 containerd[1460]: time="2025-05-13T05:48:30.590674244Z" level=info msg="TearDown network for sandbox \"a1fc3402e613818a9f4f4aca2dcd6712ac964a3dfeee7529db24465dd13c3c3e\" successfully" May 13 05:48:30.609609 containerd[1460]: time="2025-05-13T05:48:30.609427493Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a1fc3402e613818a9f4f4aca2dcd6712ac964a3dfeee7529db24465dd13c3c3e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 13 05:48:30.609831 containerd[1460]: time="2025-05-13T05:48:30.609717807Z" level=info msg="RemovePodSandbox \"a1fc3402e613818a9f4f4aca2dcd6712ac964a3dfeee7529db24465dd13c3c3e\" returns successfully" May 13 05:48:30.611320 containerd[1460]: time="2025-05-13T05:48:30.611034209Z" level=info msg="StopPodSandbox for \"6d76c81694c5bbc2995d80d7590befbd258f6d03cac63f9bf6d22684c0869e0c\"" May 13 05:48:30.611576 containerd[1460]: time="2025-05-13T05:48:30.611375380Z" level=info msg="TearDown network for sandbox \"6d76c81694c5bbc2995d80d7590befbd258f6d03cac63f9bf6d22684c0869e0c\" successfully" May 13 05:48:30.611576 containerd[1460]: time="2025-05-13T05:48:30.611413071Z" level=info msg="StopPodSandbox for \"6d76c81694c5bbc2995d80d7590befbd258f6d03cac63f9bf6d22684c0869e0c\" returns successfully" May 13 05:48:30.614512 containerd[1460]: time="2025-05-13T05:48:30.612395084Z" level=info msg="RemovePodSandbox for \"6d76c81694c5bbc2995d80d7590befbd258f6d03cac63f9bf6d22684c0869e0c\"" May 13 05:48:30.614512 containerd[1460]: time="2025-05-13T05:48:30.612475786Z" level=info msg="Forcibly stopping sandbox \"6d76c81694c5bbc2995d80d7590befbd258f6d03cac63f9bf6d22684c0869e0c\"" May 13 05:48:30.614512 containerd[1460]: time="2025-05-13T05:48:30.612620337Z" level=info msg="TearDown network for sandbox \"6d76c81694c5bbc2995d80d7590befbd258f6d03cac63f9bf6d22684c0869e0c\" successfully" May 13 05:48:30.620553 containerd[1460]: time="2025-05-13T05:48:30.620465005Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6d76c81694c5bbc2995d80d7590befbd258f6d03cac63f9bf6d22684c0869e0c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 13 05:48:30.620896 containerd[1460]: time="2025-05-13T05:48:30.620846291Z" level=info msg="RemovePodSandbox \"6d76c81694c5bbc2995d80d7590befbd258f6d03cac63f9bf6d22684c0869e0c\" returns successfully"