May 13 02:21:30.046682 kernel: Linux version 6.6.89-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Mon May 12 22:20:27 -00 2025 May 13 02:21:30.046711 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=6100311d7c6d7eb9a6a36f80463a2db3a7c7060cd315301434a372d8e2ca9bd6 May 13 02:21:30.046722 kernel: BIOS-provided physical RAM map: May 13 02:21:30.046730 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable May 13 02:21:30.046737 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved May 13 02:21:30.046748 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved May 13 02:21:30.046756 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdcfff] usable May 13 02:21:30.046764 kernel: BIOS-e820: [mem 0x00000000bffdd000-0x00000000bfffffff] reserved May 13 02:21:30.046786 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved May 13 02:21:30.046794 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved May 13 02:21:30.046802 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000013fffffff] usable May 13 02:21:30.046810 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved May 13 02:21:30.046817 kernel: NX (Execute Disable) protection: active May 13 02:21:30.046825 kernel: APIC: Static calls initialized May 13 02:21:30.046850 kernel: SMBIOS 3.0.0 present. 
May 13 02:21:30.046858 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.16.3-debian-1.16.3-2 04/01/2014 May 13 02:21:30.046866 kernel: Hypervisor detected: KVM May 13 02:21:30.046874 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 May 13 02:21:30.046882 kernel: kvm-clock: using sched offset of 3698788832 cycles May 13 02:21:30.046891 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns May 13 02:21:30.046902 kernel: tsc: Detected 1996.249 MHz processor May 13 02:21:30.046911 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved May 13 02:21:30.046920 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable May 13 02:21:30.046928 kernel: last_pfn = 0x140000 max_arch_pfn = 0x400000000 May 13 02:21:30.046943 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs May 13 02:21:30.046952 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT May 13 02:21:30.046960 kernel: last_pfn = 0xbffdd max_arch_pfn = 0x400000000 May 13 02:21:30.046969 kernel: ACPI: Early table checksum verification disabled May 13 02:21:30.046980 kernel: ACPI: RSDP 0x00000000000F51E0 000014 (v00 BOCHS ) May 13 02:21:30.046988 kernel: ACPI: RSDT 0x00000000BFFE1B65 000030 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 13 02:21:30.046997 kernel: ACPI: FACP 0x00000000BFFE1A49 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 13 02:21:30.047005 kernel: ACPI: DSDT 0x00000000BFFE0040 001A09 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 13 02:21:30.047013 kernel: ACPI: FACS 0x00000000BFFE0000 000040 May 13 02:21:30.047022 kernel: ACPI: APIC 0x00000000BFFE1ABD 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) May 13 02:21:30.047030 kernel: ACPI: WAET 0x00000000BFFE1B3D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 13 02:21:30.047038 kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1a49-0xbffe1abc] May 13 02:21:30.047046 kernel: ACPI: Reserving DSDT table memory at [mem 
0xbffe0040-0xbffe1a48] May 13 02:21:30.047058 kernel: ACPI: Reserving FACS table memory at [mem 0xbffe0000-0xbffe003f] May 13 02:21:30.047066 kernel: ACPI: Reserving APIC table memory at [mem 0xbffe1abd-0xbffe1b3c] May 13 02:21:30.047075 kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1b3d-0xbffe1b64] May 13 02:21:30.047087 kernel: No NUMA configuration found May 13 02:21:30.047096 kernel: Faking a node at [mem 0x0000000000000000-0x000000013fffffff] May 13 02:21:30.047104 kernel: NODE_DATA(0) allocated [mem 0x13fffa000-0x13fffffff] May 13 02:21:30.047113 kernel: Zone ranges: May 13 02:21:30.047125 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] May 13 02:21:30.047133 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] May 13 02:21:30.047142 kernel: Normal [mem 0x0000000100000000-0x000000013fffffff] May 13 02:21:30.047150 kernel: Movable zone start for each node May 13 02:21:30.047159 kernel: Early memory node ranges May 13 02:21:30.047168 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] May 13 02:21:30.047176 kernel: node 0: [mem 0x0000000000100000-0x00000000bffdcfff] May 13 02:21:30.047185 kernel: node 0: [mem 0x0000000100000000-0x000000013fffffff] May 13 02:21:30.047197 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000013fffffff] May 13 02:21:30.047205 kernel: On node 0, zone DMA: 1 pages in unavailable ranges May 13 02:21:30.047214 kernel: On node 0, zone DMA: 97 pages in unavailable ranges May 13 02:21:30.047223 kernel: On node 0, zone Normal: 35 pages in unavailable ranges May 13 02:21:30.047231 kernel: ACPI: PM-Timer IO Port: 0x608 May 13 02:21:30.047240 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) May 13 02:21:30.047249 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 May 13 02:21:30.047257 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) May 13 02:21:30.047266 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) May 13 02:21:30.047278 kernel: ACPI: 
INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) May 13 02:21:30.047287 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) May 13 02:21:30.047296 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) May 13 02:21:30.047304 kernel: ACPI: Using ACPI (MADT) for SMP configuration information May 13 02:21:30.047313 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs May 13 02:21:30.047321 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() May 13 02:21:30.047330 kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices May 13 02:21:30.047339 kernel: Booting paravirtualized kernel on KVM May 13 02:21:30.047348 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns May 13 02:21:30.047360 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 May 13 02:21:30.047369 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576 May 13 02:21:30.047377 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152 May 13 02:21:30.047386 kernel: pcpu-alloc: [0] 0 1 May 13 02:21:30.047394 kernel: kvm-guest: PV spinlocks disabled, no host support May 13 02:21:30.047404 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=6100311d7c6d7eb9a6a36f80463a2db3a7c7060cd315301434a372d8e2ca9bd6 May 13 02:21:30.047413 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. 
May 13 02:21:30.047422 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) May 13 02:21:30.047434 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 13 02:21:30.047442 kernel: Fallback order for Node 0: 0 May 13 02:21:30.047451 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1031901 May 13 02:21:30.047459 kernel: Policy zone: Normal May 13 02:21:30.047468 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off May 13 02:21:30.047477 kernel: software IO TLB: area num 2. May 13 02:21:30.047486 kernel: Memory: 3962120K/4193772K available (14336K kernel code, 2296K rwdata, 25068K rodata, 43604K init, 1468K bss, 231392K reserved, 0K cma-reserved) May 13 02:21:30.047494 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 May 13 02:21:30.047503 kernel: ftrace: allocating 37993 entries in 149 pages May 13 02:21:30.047515 kernel: ftrace: allocated 149 pages with 4 groups May 13 02:21:30.047523 kernel: Dynamic Preempt: voluntary May 13 02:21:30.047532 kernel: rcu: Preemptible hierarchical RCU implementation. May 13 02:21:30.047541 kernel: rcu: RCU event tracing is enabled. May 13 02:21:30.047550 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. May 13 02:21:30.047559 kernel: Trampoline variant of Tasks RCU enabled. May 13 02:21:30.047568 kernel: Rude variant of Tasks RCU enabled. May 13 02:21:30.047577 kernel: Tracing variant of Tasks RCU enabled. May 13 02:21:30.047585 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. May 13 02:21:30.047597 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 May 13 02:21:30.047606 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 May 13 02:21:30.047614 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
May 13 02:21:30.047623 kernel: Console: colour VGA+ 80x25 May 13 02:21:30.047632 kernel: printk: console [tty0] enabled May 13 02:21:30.047640 kernel: printk: console [ttyS0] enabled May 13 02:21:30.047649 kernel: ACPI: Core revision 20230628 May 13 02:21:30.047657 kernel: APIC: Switch to symmetric I/O mode setup May 13 02:21:30.047666 kernel: x2apic enabled May 13 02:21:30.047678 kernel: APIC: Switched APIC routing to: physical x2apic May 13 02:21:30.047686 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 May 13 02:21:30.047695 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized May 13 02:21:30.047704 kernel: Calibrating delay loop (skipped) preset value.. 3992.49 BogoMIPS (lpj=1996249) May 13 02:21:30.047713 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 May 13 02:21:30.047721 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 May 13 02:21:30.047730 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization May 13 02:21:30.047738 kernel: Spectre V2 : Mitigation: Retpolines May 13 02:21:30.047747 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT May 13 02:21:30.047759 kernel: Speculative Store Bypass: Vulnerable May 13 02:21:30.047768 kernel: x86/fpu: x87 FPU will use FXSAVE May 13 02:21:30.047792 kernel: Freeing SMP alternatives memory: 32K May 13 02:21:30.047801 kernel: pid_max: default: 32768 minimum: 301 May 13 02:21:30.047820 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity May 13 02:21:30.047833 kernel: landlock: Up and running. May 13 02:21:30.047842 kernel: SELinux: Initializing. 
May 13 02:21:30.047851 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 13 02:21:30.047860 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 13 02:21:30.047869 kernel: smpboot: CPU0: AMD Intel Core i7 9xx (Nehalem Class Core i7) (family: 0x6, model: 0x1a, stepping: 0x3) May 13 02:21:30.047879 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. May 13 02:21:30.047888 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. May 13 02:21:30.047900 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. May 13 02:21:30.047909 kernel: Performance Events: AMD PMU driver. May 13 02:21:30.047918 kernel: ... version: 0 May 13 02:21:30.047927 kernel: ... bit width: 48 May 13 02:21:30.047939 kernel: ... generic registers: 4 May 13 02:21:30.047948 kernel: ... value mask: 0000ffffffffffff May 13 02:21:30.047957 kernel: ... max period: 00007fffffffffff May 13 02:21:30.047966 kernel: ... fixed-purpose events: 0 May 13 02:21:30.047975 kernel: ... event mask: 000000000000000f May 13 02:21:30.047984 kernel: signal: max sigframe size: 1440 May 13 02:21:30.047993 kernel: rcu: Hierarchical SRCU implementation. May 13 02:21:30.048002 kernel: rcu: Max phase no-delay instances is 400. May 13 02:21:30.048011 kernel: smp: Bringing up secondary CPUs ... May 13 02:21:30.048020 kernel: smpboot: x86: Booting SMP configuration: May 13 02:21:30.048032 kernel: .... 
node #0, CPUs: #1 May 13 02:21:30.048041 kernel: smp: Brought up 1 node, 2 CPUs May 13 02:21:30.048050 kernel: smpboot: Max logical packages: 2 May 13 02:21:30.048059 kernel: smpboot: Total of 2 processors activated (7984.99 BogoMIPS) May 13 02:21:30.048068 kernel: devtmpfs: initialized May 13 02:21:30.048077 kernel: x86/mm: Memory block size: 128MB May 13 02:21:30.048087 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns May 13 02:21:30.048096 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) May 13 02:21:30.048105 kernel: pinctrl core: initialized pinctrl subsystem May 13 02:21:30.048117 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family May 13 02:21:30.048126 kernel: audit: initializing netlink subsys (disabled) May 13 02:21:30.048135 kernel: audit: type=2000 audit(1747102889.062:1): state=initialized audit_enabled=0 res=1 May 13 02:21:30.048144 kernel: thermal_sys: Registered thermal governor 'step_wise' May 13 02:21:30.048153 kernel: thermal_sys: Registered thermal governor 'user_space' May 13 02:21:30.048162 kernel: cpuidle: using governor menu May 13 02:21:30.048171 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 May 13 02:21:30.048180 kernel: dca service started, version 1.12.1 May 13 02:21:30.048189 kernel: PCI: Using configuration type 1 for base access May 13 02:21:30.048202 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
May 13 02:21:30.048211 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages May 13 02:21:30.048220 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page May 13 02:21:30.048229 kernel: ACPI: Added _OSI(Module Device) May 13 02:21:30.048238 kernel: ACPI: Added _OSI(Processor Device) May 13 02:21:30.048247 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) May 13 02:21:30.048256 kernel: ACPI: Added _OSI(Processor Aggregator Device) May 13 02:21:30.048265 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded May 13 02:21:30.048274 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC May 13 02:21:30.048286 kernel: ACPI: Interpreter enabled May 13 02:21:30.048295 kernel: ACPI: PM: (supports S0 S3 S5) May 13 02:21:30.048304 kernel: ACPI: Using IOAPIC for interrupt routing May 13 02:21:30.048313 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug May 13 02:21:30.048322 kernel: PCI: Using E820 reservations for host bridge windows May 13 02:21:30.048331 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F May 13 02:21:30.048340 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) May 13 02:21:30.048490 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] May 13 02:21:30.048595 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] May 13 02:21:30.048688 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge May 13 02:21:30.048702 kernel: acpiphp: Slot [3] registered May 13 02:21:30.048711 kernel: acpiphp: Slot [4] registered May 13 02:21:30.048720 kernel: acpiphp: Slot [5] registered May 13 02:21:30.048729 kernel: acpiphp: Slot [6] registered May 13 02:21:30.048738 kernel: acpiphp: Slot [7] registered May 13 02:21:30.048747 kernel: acpiphp: Slot [8] registered May 13 02:21:30.048761 kernel: acpiphp: Slot [9] registered May 13 02:21:30.048770 kernel: 
acpiphp: Slot [10] registered May 13 02:21:30.049366 kernel: acpiphp: Slot [11] registered May 13 02:21:30.049376 kernel: acpiphp: Slot [12] registered May 13 02:21:30.049385 kernel: acpiphp: Slot [13] registered May 13 02:21:30.049395 kernel: acpiphp: Slot [14] registered May 13 02:21:30.049404 kernel: acpiphp: Slot [15] registered May 13 02:21:30.049413 kernel: acpiphp: Slot [16] registered May 13 02:21:30.049422 kernel: acpiphp: Slot [17] registered May 13 02:21:30.049431 kernel: acpiphp: Slot [18] registered May 13 02:21:30.049445 kernel: acpiphp: Slot [19] registered May 13 02:21:30.049454 kernel: acpiphp: Slot [20] registered May 13 02:21:30.049463 kernel: acpiphp: Slot [21] registered May 13 02:21:30.049472 kernel: acpiphp: Slot [22] registered May 13 02:21:30.049481 kernel: acpiphp: Slot [23] registered May 13 02:21:30.049490 kernel: acpiphp: Slot [24] registered May 13 02:21:30.049499 kernel: acpiphp: Slot [25] registered May 13 02:21:30.049508 kernel: acpiphp: Slot [26] registered May 13 02:21:30.049517 kernel: acpiphp: Slot [27] registered May 13 02:21:30.049529 kernel: acpiphp: Slot [28] registered May 13 02:21:30.049538 kernel: acpiphp: Slot [29] registered May 13 02:21:30.049547 kernel: acpiphp: Slot [30] registered May 13 02:21:30.049556 kernel: acpiphp: Slot [31] registered May 13 02:21:30.049565 kernel: PCI host bridge to bus 0000:00 May 13 02:21:30.049671 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] May 13 02:21:30.049758 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] May 13 02:21:30.049873 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] May 13 02:21:30.049965 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] May 13 02:21:30.050046 kernel: pci_bus 0000:00: root bus resource [mem 0xc000000000-0xc07fffffff window] May 13 02:21:30.050129 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] May 13 02:21:30.050249 kernel: pci 0000:00:00.0: 
[8086:1237] type 00 class 0x060000 May 13 02:21:30.050358 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 May 13 02:21:30.052884 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 May 13 02:21:30.052997 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc120-0xc12f] May 13 02:21:30.053095 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] May 13 02:21:30.053189 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6] May 13 02:21:30.053284 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] May 13 02:21:30.053380 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376] May 13 02:21:30.053483 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 May 13 02:21:30.053579 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI May 13 02:21:30.053680 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB May 13 02:21:30.053821 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 May 13 02:21:30.053924 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref] May 13 02:21:30.054019 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xc000000000-0xc000003fff 64bit pref] May 13 02:21:30.054114 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfeb90000-0xfeb90fff] May 13 02:21:30.054208 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfeb80000-0xfeb8ffff pref] May 13 02:21:30.054305 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] May 13 02:21:30.054415 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 May 13 02:21:30.054512 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc080-0xc0bf] May 13 02:21:30.054607 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfeb91000-0xfeb91fff] May 13 02:21:30.054708 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xc000004000-0xc000007fff 64bit pref] May 13 02:21:30.055769 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb00000-0xfeb7ffff pref] May 13 02:21:30.055902 kernel: pci 0000:00:04.0: [1af4:1001] 
type 00 class 0x010000 May 13 02:21:30.056000 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f] May 13 02:21:30.056102 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfeb92000-0xfeb92fff] May 13 02:21:30.056197 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xc000008000-0xc00000bfff 64bit pref] May 13 02:21:30.056308 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 May 13 02:21:30.056405 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc0c0-0xc0ff] May 13 02:21:30.056500 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xc00000c000-0xc00000ffff 64bit pref] May 13 02:21:30.056603 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 May 13 02:21:30.056698 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc100-0xc11f] May 13 02:21:30.057832 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfeb93000-0xfeb93fff] May 13 02:21:30.057937 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xc000010000-0xc000013fff 64bit pref] May 13 02:21:30.057951 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 May 13 02:21:30.057961 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 May 13 02:21:30.057970 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 May 13 02:21:30.057980 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 May 13 02:21:30.057989 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 May 13 02:21:30.057998 kernel: iommu: Default domain type: Translated May 13 02:21:30.058013 kernel: iommu: DMA domain TLB invalidation policy: lazy mode May 13 02:21:30.058022 kernel: PCI: Using ACPI for IRQ routing May 13 02:21:30.058032 kernel: PCI: pci_cache_line_size set to 64 bytes May 13 02:21:30.058041 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] May 13 02:21:30.058050 kernel: e820: reserve RAM buffer [mem 0xbffdd000-0xbfffffff] May 13 02:21:30.058143 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device May 13 02:21:30.058238 kernel: pci 0000:00:02.0: vgaarb: bridge control possible May 13 02:21:30.058330 kernel: pci 0000:00:02.0: 
vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none May 13 02:21:30.058344 kernel: vgaarb: loaded May 13 02:21:30.058359 kernel: clocksource: Switched to clocksource kvm-clock May 13 02:21:30.058368 kernel: VFS: Disk quotas dquot_6.6.0 May 13 02:21:30.058378 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) May 13 02:21:30.058387 kernel: pnp: PnP ACPI init May 13 02:21:30.058487 kernel: pnp 00:03: [dma 2] May 13 02:21:30.058502 kernel: pnp: PnP ACPI: found 5 devices May 13 02:21:30.058512 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns May 13 02:21:30.058522 kernel: NET: Registered PF_INET protocol family May 13 02:21:30.058535 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) May 13 02:21:30.058545 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) May 13 02:21:30.058554 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) May 13 02:21:30.058563 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) May 13 02:21:30.058573 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) May 13 02:21:30.058582 kernel: TCP: Hash tables configured (established 32768 bind 32768) May 13 02:21:30.058592 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) May 13 02:21:30.058601 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) May 13 02:21:30.058610 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family May 13 02:21:30.058622 kernel: NET: Registered PF_XDP protocol family May 13 02:21:30.058707 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] May 13 02:21:30.060819 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] May 13 02:21:30.060910 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] May 13 02:21:30.060994 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff 
window] May 13 02:21:30.061075 kernel: pci_bus 0000:00: resource 8 [mem 0xc000000000-0xc07fffffff window] May 13 02:21:30.061172 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release May 13 02:21:30.061269 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers May 13 02:21:30.061289 kernel: PCI: CLS 0 bytes, default 64 May 13 02:21:30.061299 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) May 13 02:21:30.061309 kernel: software IO TLB: mapped [mem 0x00000000bbfdd000-0x00000000bffdd000] (64MB) May 13 02:21:30.061318 kernel: Initialise system trusted keyrings May 13 02:21:30.061328 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 May 13 02:21:30.061338 kernel: Key type asymmetric registered May 13 02:21:30.061347 kernel: Asymmetric key parser 'x509' registered May 13 02:21:30.061356 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) May 13 02:21:30.061368 kernel: io scheduler mq-deadline registered May 13 02:21:30.061378 kernel: io scheduler kyber registered May 13 02:21:30.061387 kernel: io scheduler bfq registered May 13 02:21:30.061396 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 May 13 02:21:30.061407 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10 May 13 02:21:30.061417 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 May 13 02:21:30.061427 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 May 13 02:21:30.061437 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 May 13 02:21:30.061446 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled May 13 02:21:30.061456 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A May 13 02:21:30.061468 kernel: random: crng init done May 13 02:21:30.061477 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 May 13 02:21:30.061487 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 May 13 02:21:30.061496 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 May 13 02:21:30.061590 kernel: rtc_cmos 00:04: RTC can 
wake from S4 May 13 02:21:30.061678 kernel: rtc_cmos 00:04: registered as rtc0 May 13 02:21:30.061692 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 May 13 02:21:30.061792 kernel: rtc_cmos 00:04: setting system clock to 2025-05-13T02:21:29 UTC (1747102889) May 13 02:21:30.061890 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram May 13 02:21:30.061903 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled May 13 02:21:30.061913 kernel: NET: Registered PF_INET6 protocol family May 13 02:21:30.061922 kernel: Segment Routing with IPv6 May 13 02:21:30.061931 kernel: In-situ OAM (IOAM) with IPv6 May 13 02:21:30.061940 kernel: NET: Registered PF_PACKET protocol family May 13 02:21:30.061950 kernel: Key type dns_resolver registered May 13 02:21:30.061959 kernel: IPI shorthand broadcast: enabled May 13 02:21:30.061968 kernel: sched_clock: Marking stable (974095280, 177143620)->(1179256455, -28017555) May 13 02:21:30.061982 kernel: registered taskstats version 1 May 13 02:21:30.061991 kernel: Loading compiled-in X.509 certificates May 13 02:21:30.062001 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.89-flatcar: 72bf95fdb9aed340290dd5f38e76c1ea0e6f32b4' May 13 02:21:30.062010 kernel: Key type .fscrypt registered May 13 02:21:30.062019 kernel: Key type fscrypt-provisioning registered May 13 02:21:30.062029 kernel: ima: No TPM chip found, activating TPM-bypass! 
May 13 02:21:30.062038 kernel: ima: Allocated hash algorithm: sha1 May 13 02:21:30.062047 kernel: ima: No architecture policies found May 13 02:21:30.062058 kernel: clk: Disabling unused clocks May 13 02:21:30.062068 kernel: Freeing unused kernel image (initmem) memory: 43604K May 13 02:21:30.062077 kernel: Write protecting the kernel read-only data: 40960k May 13 02:21:30.062087 kernel: Freeing unused kernel image (rodata/data gap) memory: 1556K May 13 02:21:30.062096 kernel: Run /init as init process May 13 02:21:30.062105 kernel: with arguments: May 13 02:21:30.062114 kernel: /init May 13 02:21:30.062123 kernel: with environment: May 13 02:21:30.062132 kernel: HOME=/ May 13 02:21:30.062144 kernel: TERM=linux May 13 02:21:30.062153 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 13 02:21:30.062164 systemd[1]: Successfully made /usr/ read-only. May 13 02:21:30.062178 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 13 02:21:30.062189 systemd[1]: Detected virtualization kvm. May 13 02:21:30.062199 systemd[1]: Detected architecture x86-64. May 13 02:21:30.062208 systemd[1]: Running in initrd. May 13 02:21:30.062220 systemd[1]: No hostname configured, using default hostname. May 13 02:21:30.062231 systemd[1]: Hostname set to . May 13 02:21:30.062240 systemd[1]: Initializing machine ID from VM UUID. May 13 02:21:30.062250 systemd[1]: Queued start job for default target initrd.target. May 13 02:21:30.062260 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 13 02:21:30.062270 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
May 13 02:21:30.062281 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... May 13 02:21:30.062301 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 13 02:21:30.062313 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... May 13 02:21:30.062324 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... May 13 02:21:30.062336 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... May 13 02:21:30.062346 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... May 13 02:21:30.062356 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 13 02:21:30.062369 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 13 02:21:30.062379 systemd[1]: Reached target paths.target - Path Units. May 13 02:21:30.062389 systemd[1]: Reached target slices.target - Slice Units. May 13 02:21:30.062400 systemd[1]: Reached target swap.target - Swaps. May 13 02:21:30.062410 systemd[1]: Reached target timers.target - Timer Units. May 13 02:21:30.062420 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. May 13 02:21:30.062430 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 13 02:21:30.062441 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). May 13 02:21:30.062453 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. May 13 02:21:30.062463 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 13 02:21:30.062473 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 13 02:21:30.062484 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
May 13 02:21:30.062494 systemd[1]: Reached target sockets.target - Socket Units. May 13 02:21:30.062504 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... May 13 02:21:30.062514 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 13 02:21:30.062525 systemd[1]: Finished network-cleanup.service - Network Cleanup. May 13 02:21:30.062535 systemd[1]: Starting systemd-fsck-usr.service... May 13 02:21:30.062547 systemd[1]: Starting systemd-journald.service - Journal Service... May 13 02:21:30.062557 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 13 02:21:30.062567 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 13 02:21:30.062578 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. May 13 02:21:30.062609 systemd-journald[185]: Collecting audit messages is disabled. May 13 02:21:30.062638 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 13 02:21:30.062649 systemd[1]: Finished systemd-fsck-usr.service. May 13 02:21:30.062660 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 13 02:21:30.062672 systemd-journald[185]: Journal started May 13 02:21:30.062696 systemd-journald[185]: Runtime Journal (/run/log/journal/b790b546a7df45f4825f158485c5492e) is 8M, max 78.2M, 70.2M free. May 13 02:21:30.039682 systemd-modules-load[186]: Inserted module 'overlay' May 13 02:21:30.079807 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 13 02:21:30.081517 systemd-modules-load[186]: Inserted module 'br_netfilter' May 13 02:21:30.110754 kernel: Bridge firewalling registered May 13 02:21:30.120831 systemd[1]: Started systemd-journald.service - Journal Service. May 13 02:21:30.121570 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. 
May 13 02:21:30.123211 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 13 02:21:30.123960 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 13 02:21:30.127889 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 13 02:21:30.129877 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 13 02:21:30.134905 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 13 02:21:30.139086 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 13 02:21:30.150177 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 13 02:21:30.154338 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 13 02:21:30.155715 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 13 02:21:30.159967 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 13 02:21:30.160716 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 13 02:21:30.162899 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 13 02:21:30.181820 dracut-cmdline[222]: dracut-dracut-053
May 13 02:21:30.185796 dracut-cmdline[222]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=6100311d7c6d7eb9a6a36f80463a2db3a7c7060cd315301434a372d8e2ca9bd6
May 13 02:21:30.205985 systemd-resolved[220]: Positive Trust Anchors:
May 13 02:21:30.206707 systemd-resolved[220]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 13 02:21:30.207557 systemd-resolved[220]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 13 02:21:30.212676 systemd-resolved[220]: Defaulting to hostname 'linux'.
May 13 02:21:30.213614 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 13 02:21:30.214491 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 13 02:21:30.250810 kernel: SCSI subsystem initialized
May 13 02:21:30.262811 kernel: Loading iSCSI transport class v2.0-870.
May 13 02:21:30.275816 kernel: iscsi: registered transport (tcp)
May 13 02:21:30.299925 kernel: iscsi: registered transport (qla4xxx)
May 13 02:21:30.300010 kernel: QLogic iSCSI HBA Driver
May 13 02:21:30.359026 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 13 02:21:30.363714 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 13 02:21:30.429063 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 13 02:21:30.429201 kernel: device-mapper: uevent: version 1.0.3
May 13 02:21:30.432568 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
May 13 02:21:30.494903 kernel: raid6: sse2x4 gen() 5181 MB/s
May 13 02:21:30.513852 kernel: raid6: sse2x2 gen() 5989 MB/s
May 13 02:21:30.532288 kernel: raid6: sse2x1 gen() 7491 MB/s
May 13 02:21:30.532381 kernel: raid6: using algorithm sse2x1 gen() 7491 MB/s
May 13 02:21:30.551204 kernel: raid6: .... xor() 7180 MB/s, rmw enabled
May 13 02:21:30.551267 kernel: raid6: using ssse3x2 recovery algorithm
May 13 02:21:30.575256 kernel: xor: measuring software checksum speed
May 13 02:21:30.575320 kernel: prefetch64-sse : 18520 MB/sec
May 13 02:21:30.576529 kernel: generic_sse : 16772 MB/sec
May 13 02:21:30.576592 kernel: xor: using function: prefetch64-sse (18520 MB/sec)
May 13 02:21:30.758001 kernel: Btrfs loaded, zoned=no, fsverity=no
May 13 02:21:30.776449 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
May 13 02:21:30.781670 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 13 02:21:30.812266 systemd-udevd[406]: Using default interface naming scheme 'v255'.
May 13 02:21:30.817286 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 13 02:21:30.824456 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 13 02:21:30.857302 dracut-pre-trigger[418]: rd.md=0: removing MD RAID activation
May 13 02:21:30.902042 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 13 02:21:30.907388 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 13 02:21:30.965929 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 13 02:21:30.973763 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
May 13 02:21:31.026281 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
May 13 02:21:31.030558 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
May 13 02:21:31.031677 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 13 02:21:31.032276 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 13 02:21:31.034936 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
May 13 02:21:31.059398 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
May 13 02:21:31.071832 kernel: virtio_blk virtio2: 2/0/0 default/read/poll queues
May 13 02:21:31.080970 kernel: virtio_blk virtio2: [vda] 20971520 512-byte logical blocks (10.7 GB/10.0 GiB)
May 13 02:21:31.085328 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 13 02:21:31.085464 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 13 02:21:31.088004 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 13 02:21:31.090978 kernel: libata version 3.00 loaded.
May 13 02:21:31.088940 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 13 02:21:31.100273 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 13 02:21:31.100291 kernel: GPT:17805311 != 20971519
May 13 02:21:31.100308 kernel: GPT:Alternate GPT header not at the end of the disk.
May 13 02:21:31.100320 kernel: GPT:17805311 != 20971519
May 13 02:21:31.100331 kernel: ata_piix 0000:00:01.1: version 2.13
May 13 02:21:31.100481 kernel: GPT: Use GNU Parted to correct GPT errors.
May 13 02:21:31.100498 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 13 02:21:31.100510 kernel: scsi host0: ata_piix
May 13 02:21:31.089087 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 13 02:21:31.108949 kernel: scsi host1: ata_piix
May 13 02:21:31.109114 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc120 irq 14
May 13 02:21:31.109129 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc128 irq 15
May 13 02:21:31.089881 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 13 02:21:31.101639 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 13 02:21:31.112192 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
May 13 02:21:31.167326 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 13 02:21:31.170215 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 13 02:21:31.200195 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 13 02:21:31.315980 kernel: BTRFS: device fsid d5ab0fb8-9c4f-4805-8fe7-b120550325cd devid 1 transid 40 /dev/vda3 scanned by (udev-worker) (464)
May 13 02:21:31.332830 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (465)
May 13 02:21:31.340002 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
May 13 02:21:31.366134 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
May 13 02:21:31.375073 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
May 13 02:21:31.375700 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
May 13 02:21:31.387806 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 13 02:21:31.390900 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
May 13 02:21:31.423269 disk-uuid[516]: Primary Header is updated.
May 13 02:21:31.423269 disk-uuid[516]: Secondary Entries is updated.
May 13 02:21:31.423269 disk-uuid[516]: Secondary Header is updated.
May 13 02:21:31.436823 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 13 02:21:32.456869 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 13 02:21:32.459130 disk-uuid[517]: The operation has completed successfully.
May 13 02:21:32.542225 systemd[1]: disk-uuid.service: Deactivated successfully.
May 13 02:21:32.542356 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
May 13 02:21:32.592530 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
May 13 02:21:32.614168 sh[528]: Success
May 13 02:21:32.649161 kernel: device-mapper: verity: sha256 using implementation "sha256-ssse3"
May 13 02:21:32.752009 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
May 13 02:21:32.760984 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
May 13 02:21:32.776428 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
May 13 02:21:32.808065 kernel: BTRFS info (device dm-0): first mount of filesystem d5ab0fb8-9c4f-4805-8fe7-b120550325cd
May 13 02:21:32.808174 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
May 13 02:21:32.812925 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
May 13 02:21:32.818056 kernel: BTRFS info (device dm-0): disabling log replay at mount time
May 13 02:21:32.821312 kernel: BTRFS info (device dm-0): using free space tree
May 13 02:21:32.837319 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
May 13 02:21:32.838321 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
May 13 02:21:32.840880 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
May 13 02:21:32.843206 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
May 13 02:21:32.873995 kernel: BTRFS info (device vda6): first mount of filesystem af1b0b29-eeab-4df3-872e-4ad99309ae06
May 13 02:21:32.874044 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 13 02:21:32.874057 kernel: BTRFS info (device vda6): using free space tree
May 13 02:21:32.881794 kernel: BTRFS info (device vda6): auto enabling async discard
May 13 02:21:32.889817 kernel: BTRFS info (device vda6): last unmount of filesystem af1b0b29-eeab-4df3-872e-4ad99309ae06
May 13 02:21:32.898572 systemd[1]: Finished ignition-setup.service - Ignition (setup).
May 13 02:21:32.901906 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
May 13 02:21:32.971794 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 13 02:21:32.974356 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 13 02:21:33.039841 systemd-networkd[707]: lo: Link UP
May 13 02:21:33.039853 systemd-networkd[707]: lo: Gained carrier
May 13 02:21:33.043509 systemd-networkd[707]: Enumeration completed
May 13 02:21:33.043621 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 13 02:21:33.044361 systemd[1]: Reached target network.target - Network.
May 13 02:21:33.044727 systemd-networkd[707]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 13 02:21:33.044730 systemd-networkd[707]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 13 02:21:33.046683 systemd-networkd[707]: eth0: Link UP
May 13 02:21:33.046688 systemd-networkd[707]: eth0: Gained carrier
May 13 02:21:33.046697 systemd-networkd[707]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 13 02:21:33.056092 ignition[624]: Ignition 2.20.0
May 13 02:21:33.056108 ignition[624]: Stage: fetch-offline
May 13 02:21:33.056160 ignition[624]: no configs at "/usr/lib/ignition/base.d"
May 13 02:21:33.057401 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
May 13 02:21:33.056173 ignition[624]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
May 13 02:21:33.056278 ignition[624]: parsed url from cmdline: ""
May 13 02:21:33.059853 systemd-networkd[707]: eth0: DHCPv4 address 172.24.4.210/24, gateway 172.24.4.1 acquired from 172.24.4.1
May 13 02:21:33.056282 ignition[624]: no config URL provided
May 13 02:21:33.056288 ignition[624]: reading system config file "/usr/lib/ignition/user.ign"
May 13 02:21:33.056296 ignition[624]: no config at "/usr/lib/ignition/user.ign"
May 13 02:21:33.061654 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
May 13 02:21:33.056302 ignition[624]: failed to fetch config: resource requires networking
May 13 02:21:33.056481 ignition[624]: Ignition finished successfully
May 13 02:21:33.087203 ignition[718]: Ignition 2.20.0
May 13 02:21:33.087842 ignition[718]: Stage: fetch
May 13 02:21:33.088087 ignition[718]: no configs at "/usr/lib/ignition/base.d"
May 13 02:21:33.088100 ignition[718]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
May 13 02:21:33.088201 ignition[718]: parsed url from cmdline: ""
May 13 02:21:33.088206 ignition[718]: no config URL provided
May 13 02:21:33.088212 ignition[718]: reading system config file "/usr/lib/ignition/user.ign"
May 13 02:21:33.088220 ignition[718]: no config at "/usr/lib/ignition/user.ign"
May 13 02:21:33.088369 ignition[718]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1
May 13 02:21:33.088568 ignition[718]: config drive ("/dev/disk/by-label/config-2") not found. Waiting...
May 13 02:21:33.088594 ignition[718]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting...
May 13 02:21:33.341228 ignition[718]: GET result: OK
May 13 02:21:33.341407 ignition[718]: parsing config with SHA512: ae1366d96606a111c8d3a73df6a13e96f38c8ade66703893216668a40c5746f16813b747fe544352d7e8b343e0f6b55d55083caea759754bbe7c8c70e55776d4
May 13 02:21:33.353838 unknown[718]: fetched base config from "system"
May 13 02:21:33.353857 unknown[718]: fetched base config from "system"
May 13 02:21:33.353878 unknown[718]: fetched user config from "openstack"
May 13 02:21:33.356077 ignition[718]: fetch: fetch complete
May 13 02:21:33.356093 ignition[718]: fetch: fetch passed
May 13 02:21:33.356224 ignition[718]: Ignition finished successfully
May 13 02:21:33.360374 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
May 13 02:21:33.364224 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
May 13 02:21:33.408315 ignition[725]: Ignition 2.20.0
May 13 02:21:33.408336 ignition[725]: Stage: kargs
May 13 02:21:33.408771 ignition[725]: no configs at "/usr/lib/ignition/base.d"
May 13 02:21:33.412833 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
May 13 02:21:33.408833 ignition[725]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
May 13 02:21:33.410651 ignition[725]: kargs: kargs passed
May 13 02:21:33.410731 ignition[725]: Ignition finished successfully
May 13 02:21:33.419063 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
May 13 02:21:33.459477 ignition[731]: Ignition 2.20.0
May 13 02:21:33.461951 ignition[731]: Stage: disks
May 13 02:21:33.462502 ignition[731]: no configs at "/usr/lib/ignition/base.d"
May 13 02:21:33.462533 ignition[731]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
May 13 02:21:33.465379 ignition[731]: disks: disks passed
May 13 02:21:33.465479 ignition[731]: Ignition finished successfully
May 13 02:21:33.468342 systemd[1]: Finished ignition-disks.service - Ignition (disks).
May 13 02:21:33.470340 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
May 13 02:21:33.472596 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 13 02:21:33.475577 systemd[1]: Reached target local-fs.target - Local File Systems.
May 13 02:21:33.478516 systemd[1]: Reached target sysinit.target - System Initialization.
May 13 02:21:33.481060 systemd[1]: Reached target basic.target - Basic System.
May 13 02:21:33.485684 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
May 13 02:21:33.538372 systemd-fsck[739]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
May 13 02:21:33.550155 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
May 13 02:21:33.557028 systemd[1]: Mounting sysroot.mount - /sysroot...
May 13 02:21:33.725827 kernel: EXT4-fs (vda9): mounted filesystem c9958eea-1ed5-48cc-be53-8e1c8ef051da r/w with ordered data mode. Quota mode: none.
May 13 02:21:33.726387 systemd[1]: Mounted sysroot.mount - /sysroot.
May 13 02:21:33.728025 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
May 13 02:21:33.731133 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 13 02:21:33.733871 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
May 13 02:21:33.735119 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
May 13 02:21:33.737905 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent...
May 13 02:21:33.739288 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 13 02:21:33.740179 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
May 13 02:21:33.748500 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
May 13 02:21:33.750922 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
May 13 02:21:33.772813 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (747)
May 13 02:21:33.785491 kernel: BTRFS info (device vda6): first mount of filesystem af1b0b29-eeab-4df3-872e-4ad99309ae06
May 13 02:21:33.785531 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 13 02:21:33.785544 kernel: BTRFS info (device vda6): using free space tree
May 13 02:21:33.799488 kernel: BTRFS info (device vda6): auto enabling async discard
May 13 02:21:33.805361 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 13 02:21:33.876149 initrd-setup-root[774]: cut: /sysroot/etc/passwd: No such file or directory
May 13 02:21:33.887257 initrd-setup-root[782]: cut: /sysroot/etc/group: No such file or directory
May 13 02:21:33.896587 initrd-setup-root[789]: cut: /sysroot/etc/shadow: No such file or directory
May 13 02:21:33.903448 initrd-setup-root[796]: cut: /sysroot/etc/gshadow: No such file or directory
May 13 02:21:33.992662 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
May 13 02:21:33.995932 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
May 13 02:21:33.997911 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
May 13 02:21:34.011963 systemd[1]: sysroot-oem.mount: Deactivated successfully.
May 13 02:21:34.014820 kernel: BTRFS info (device vda6): last unmount of filesystem af1b0b29-eeab-4df3-872e-4ad99309ae06
May 13 02:21:34.038871 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
May 13 02:21:34.043285 ignition[863]: INFO : Ignition 2.20.0
May 13 02:21:34.044612 ignition[863]: INFO : Stage: mount
May 13 02:21:34.044612 ignition[863]: INFO : no configs at "/usr/lib/ignition/base.d"
May 13 02:21:34.044612 ignition[863]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
May 13 02:21:34.047790 ignition[863]: INFO : mount: mount passed
May 13 02:21:34.047790 ignition[863]: INFO : Ignition finished successfully
May 13 02:21:34.046123 systemd[1]: Finished ignition-mount.service - Ignition (mount).
May 13 02:21:34.603255 systemd-networkd[707]: eth0: Gained IPv6LL
May 13 02:21:40.949240 coreos-metadata[749]: May 13 02:21:40.949 WARN failed to locate config-drive, using the metadata service API instead
May 13 02:21:40.991555 coreos-metadata[749]: May 13 02:21:40.991 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
May 13 02:21:41.005343 coreos-metadata[749]: May 13 02:21:41.005 INFO Fetch successful
May 13 02:21:41.006770 coreos-metadata[749]: May 13 02:21:41.006 INFO wrote hostname ci-4284-0-0-n-0dbb4c7115.novalocal to /sysroot/etc/hostname
May 13 02:21:41.009141 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully.
May 13 02:21:41.009359 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent.
May 13 02:21:41.016965 systemd[1]: Starting ignition-files.service - Ignition (files)...
May 13 02:21:41.049289 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 13 02:21:41.081903 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (881)
May 13 02:21:41.092929 kernel: BTRFS info (device vda6): first mount of filesystem af1b0b29-eeab-4df3-872e-4ad99309ae06
May 13 02:21:41.093051 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 13 02:21:41.094299 kernel: BTRFS info (device vda6): using free space tree
May 13 02:21:41.105831 kernel: BTRFS info (device vda6): auto enabling async discard
May 13 02:21:41.112142 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 13 02:21:41.152742 ignition[898]: INFO : Ignition 2.20.0
May 13 02:21:41.152742 ignition[898]: INFO : Stage: files
May 13 02:21:41.156951 ignition[898]: INFO : no configs at "/usr/lib/ignition/base.d"
May 13 02:21:41.156951 ignition[898]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
May 13 02:21:41.160546 ignition[898]: DEBUG : files: compiled without relabeling support, skipping
May 13 02:21:41.160546 ignition[898]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 13 02:21:41.160546 ignition[898]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 13 02:21:41.166577 ignition[898]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 13 02:21:41.168479 ignition[898]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 13 02:21:41.170471 ignition[898]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 13 02:21:41.168876 unknown[898]: wrote ssh authorized keys file for user: core
May 13 02:21:41.173356 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
May 13 02:21:41.173356 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
May 13 02:21:41.242883 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
May 13 02:21:41.530564 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
May 13 02:21:41.530564 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 13 02:21:41.530564 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
May 13 02:21:42.316912 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
May 13 02:21:42.911996 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 13 02:21:42.911996 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
May 13 02:21:42.916871 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
May 13 02:21:42.916871 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
May 13 02:21:42.916871 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 13 02:21:42.916871 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 13 02:21:42.916871 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 13 02:21:42.916871 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 13 02:21:42.916871 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 13 02:21:42.916871 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 13 02:21:42.916871 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 13 02:21:42.916871 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
May 13 02:21:42.916871 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
May 13 02:21:42.916871 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
May 13 02:21:42.916871 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1
May 13 02:21:43.488106 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
May 13 02:21:45.971042 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
May 13 02:21:45.971042 ignition[898]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
May 13 02:21:45.971042 ignition[898]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 13 02:21:45.971042 ignition[898]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 13 02:21:45.971042 ignition[898]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
May 13 02:21:45.971042 ignition[898]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
May 13 02:21:45.971042 ignition[898]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
May 13 02:21:45.992065 ignition[898]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
May 13 02:21:45.992065 ignition[898]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 13 02:21:45.992065 ignition[898]: INFO : files: files passed
May 13 02:21:45.992065 ignition[898]: INFO : Ignition finished successfully
May 13 02:21:45.975933 systemd[1]: Finished ignition-files.service - Ignition (files).
May 13 02:21:45.983018 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
May 13 02:21:45.987931 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
May 13 02:21:46.008726 systemd[1]: ignition-quench.service: Deactivated successfully.
May 13 02:21:46.013219 initrd-setup-root-after-ignition[928]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 13 02:21:46.013219 initrd-setup-root-after-ignition[928]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
May 13 02:21:46.008870 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
May 13 02:21:46.021256 initrd-setup-root-after-ignition[933]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 13 02:21:46.020080 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
May 13 02:21:46.022115 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
May 13 02:21:46.026912 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
May 13 02:21:46.086740 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 13 02:21:46.087078 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
May 13 02:21:46.090393 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
May 13 02:21:46.092203 systemd[1]: Reached target initrd.target - Initrd Default Target.
May 13 02:21:46.094751 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
May 13 02:21:46.097032 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
May 13 02:21:46.137585 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 13 02:21:46.143020 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
May 13 02:21:46.184528 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
May 13 02:21:46.186350 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
May 13 02:21:46.189512 systemd[1]: Stopped target timers.target - Timer Units.
May 13 02:21:46.192315 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 13 02:21:46.192634 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 13 02:21:46.195659 systemd[1]: Stopped target initrd.target - Initrd Default Target.
May 13 02:21:46.197452 systemd[1]: Stopped target basic.target - Basic System.
May 13 02:21:46.200302 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
May 13 02:21:46.202914 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
May 13 02:21:46.205412 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
May 13 02:21:46.208369 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
May 13 02:21:46.211297 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
May 13 02:21:46.214292 systemd[1]: Stopped target sysinit.target - System Initialization.
May 13 02:21:46.217154 systemd[1]: Stopped target local-fs.target - Local File Systems.
May 13 02:21:46.220135 systemd[1]: Stopped target swap.target - Swaps.
May 13 02:21:46.222749 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 13 02:21:46.223149 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
May 13 02:21:46.226124 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
May 13 02:21:46.228054 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 13 02:21:46.230433 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
May 13 02:21:46.231285 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 13 02:21:46.233555 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 13 02:21:46.233915 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
May 13 02:21:46.237690 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 13 02:21:46.238063 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
May 13 02:21:46.241015 systemd[1]: ignition-files.service: Deactivated successfully.
May 13 02:21:46.241290 systemd[1]: Stopped ignition-files.service - Ignition (files).
May 13 02:21:46.248213 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
May 13 02:21:46.249726 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 13 02:21:46.250240 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
May 13 02:21:46.260574 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
May 13 02:21:46.263647 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 13 02:21:46.264442 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
May 13 02:21:46.269193 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 13 02:21:46.269371 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
May 13 02:21:46.281490 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 13 02:21:46.281890 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
May 13 02:21:46.297809 ignition[953]: INFO : Ignition 2.20.0
May 13 02:21:46.297809 ignition[953]: INFO : Stage: umount
May 13 02:21:46.297809 ignition[953]: INFO : no configs at "/usr/lib/ignition/base.d"
May 13 02:21:46.297809 ignition[953]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
May 13 02:21:46.302837 ignition[953]: INFO : umount: umount passed
May 13 02:21:46.302837 ignition[953]: INFO : Ignition finished successfully
May 13 02:21:46.299288 systemd[1]: ignition-mount.service: Deactivated successfully.
May 13 02:21:46.299398 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
May 13 02:21:46.302312 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 13 02:21:46.303877 systemd[1]: ignition-disks.service: Deactivated successfully.
May 13 02:21:46.303990 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
May 13 02:21:46.304751 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 13 02:21:46.304823 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
May 13 02:21:46.305300 systemd[1]: ignition-fetch.service: Deactivated successfully.
May 13 02:21:46.305346 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
May 13 02:21:46.305835 systemd[1]: Stopped target network.target - Network.
May 13 02:21:46.306262 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 13 02:21:46.306310 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
May 13 02:21:46.307442 systemd[1]: Stopped target paths.target - Path Units.
May 13 02:21:46.308350 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 13 02:21:46.311815 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 13 02:21:46.312559 systemd[1]: Stopped target slices.target - Slice Units.
May 13 02:21:46.313547 systemd[1]: Stopped target sockets.target - Socket Units.
May 13 02:21:46.314684 systemd[1]: iscsid.socket: Deactivated successfully.
May 13 02:21:46.314721 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
May 13 02:21:46.315872 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 13 02:21:46.315905 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 13 02:21:46.316847 systemd[1]: ignition-setup.service: Deactivated successfully.
May 13 02:21:46.316890 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
May 13 02:21:46.318004 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
May 13 02:21:46.318044 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
May 13 02:21:46.319386 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
May 13 02:21:46.320462 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
May 13 02:21:46.321852 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 13 02:21:46.322976 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
May 13 02:21:46.323954 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 13 02:21:46.324144 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
May 13 02:21:46.327275 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
May 13 02:21:46.329330 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 13 02:21:46.329576 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
May 13 02:21:46.331893 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
May 13 02:21:46.332325 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 13 02:21:46.332489 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
May 13 02:21:46.333260 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 13 02:21:46.333306 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
May 13 02:21:46.339861 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
May 13 02:21:46.340679 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 13 02:21:46.340730 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 13 02:21:46.342908 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 13 02:21:46.342954 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 13 02:21:46.343908 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 13 02:21:46.343951 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
May 13 02:21:46.345296 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
May 13 02:21:46.345341 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
May 13 02:21:46.346901 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 13 02:21:46.348590 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
May 13 02:21:46.348651 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
May 13 02:21:46.356153 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 13 02:21:46.356321 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 13 02:21:46.358184 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 13 02:21:46.358255 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
May 13 02:21:46.358908 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 13 02:21:46.358938 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
May 13 02:21:46.360114 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 13 02:21:46.360162 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
May 13 02:21:46.361878 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 13 02:21:46.361922 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
May 13 02:21:46.364250 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 13 02:21:46.364299 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 13 02:21:46.366912 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
May 13 02:21:46.367981 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 13 02:21:46.368030 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 13 02:21:46.369861 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 13 02:21:46.369904 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 13 02:21:46.373018 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
May 13 02:21:46.373090 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
May 13 02:21:46.373422 systemd[1]: network-cleanup.service: Deactivated successfully.
May 13 02:21:46.373566 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
May 13 02:21:46.383353 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 13 02:21:46.383457 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
May 13 02:21:46.385058 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
May 13 02:21:46.386949 systemd[1]: Starting initrd-switch-root.service - Switch Root...
May 13 02:21:46.403995 systemd[1]: Switching root.
May 13 02:21:46.442949 systemd-journald[185]: Journal stopped
May 13 02:21:48.369053 systemd-journald[185]: Received SIGTERM from PID 1 (systemd).
May 13 02:21:48.369131 kernel: SELinux: policy capability network_peer_controls=1
May 13 02:21:48.369150 kernel: SELinux: policy capability open_perms=1
May 13 02:21:48.369166 kernel: SELinux: policy capability extended_socket_class=1
May 13 02:21:48.369178 kernel: SELinux: policy capability always_check_network=0
May 13 02:21:48.369190 kernel: SELinux: policy capability cgroup_seclabel=1
May 13 02:21:48.369201 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 13 02:21:48.369218 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 13 02:21:48.369230 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 13 02:21:48.369241 kernel: audit: type=1403 audit(1747102907.235:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
May 13 02:21:48.369253 systemd[1]: Successfully loaded SELinux policy in 83.690ms.
May 13 02:21:48.369282 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 27.028ms.
May 13 02:21:48.369301 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 13 02:21:48.369313 systemd[1]: Detected virtualization kvm.
May 13 02:21:48.369326 systemd[1]: Detected architecture x86-64.
May 13 02:21:48.369338 systemd[1]: Detected first boot.
May 13 02:21:48.369350 systemd[1]: Hostname set to .
May 13 02:21:48.369362 systemd[1]: Initializing machine ID from VM UUID.
May 13 02:21:48.369374 zram_generator::config[998]: No configuration found.
May 13 02:21:48.369389 kernel: Guest personality initialized and is inactive
May 13 02:21:48.369401 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
May 13 02:21:48.369412 kernel: Initialized host personality
May 13 02:21:48.369424 kernel: NET: Registered PF_VSOCK protocol family
May 13 02:21:48.369436 systemd[1]: Populated /etc with preset unit settings.
May 13 02:21:48.369448 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
May 13 02:21:48.369461 systemd[1]: initrd-switch-root.service: Deactivated successfully.
May 13 02:21:48.369473 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
May 13 02:21:48.369485 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
May 13 02:21:48.369500 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
May 13 02:21:48.369512 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
May 13 02:21:48.369525 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
May 13 02:21:48.369537 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
May 13 02:21:48.369550 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
May 13 02:21:48.369562 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
May 13 02:21:48.369574 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
May 13 02:21:48.369586 systemd[1]: Created slice user.slice - User and Session Slice.
May 13 02:21:48.369598 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 13 02:21:48.369613 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 13 02:21:48.369625 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
May 13 02:21:48.369637 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
May 13 02:21:48.369650 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
May 13 02:21:48.369663 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 13 02:21:48.369675 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
May 13 02:21:48.369689 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 13 02:21:48.369703 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
May 13 02:21:48.369716 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
May 13 02:21:48.369728 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
May 13 02:21:48.369740 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
May 13 02:21:48.369753 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 13 02:21:48.369766 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 13 02:21:48.371666 systemd[1]: Reached target slices.target - Slice Units.
May 13 02:21:48.371688 systemd[1]: Reached target swap.target - Swaps.
May 13 02:21:48.371706 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
May 13 02:21:48.371720 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
May 13 02:21:48.371734 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
May 13 02:21:48.371747 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 13 02:21:48.371761 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 13 02:21:48.371787 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 13 02:21:48.371802 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
May 13 02:21:48.371815 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
May 13 02:21:48.371833 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
May 13 02:21:48.371850 systemd[1]: Mounting media.mount - External Media Directory...
May 13 02:21:48.371863 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 13 02:21:48.371877 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
May 13 02:21:48.371891 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
May 13 02:21:48.371904 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
May 13 02:21:48.371918 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 13 02:21:48.371931 systemd[1]: Reached target machines.target - Containers.
May 13 02:21:48.371944 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
May 13 02:21:48.371960 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 13 02:21:48.371973 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 13 02:21:48.371986 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
May 13 02:21:48.371999 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 13 02:21:48.372012 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 13 02:21:48.372025 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 13 02:21:48.372039 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
May 13 02:21:48.372052 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 13 02:21:48.372065 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 13 02:21:48.372081 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
May 13 02:21:48.372095 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
May 13 02:21:48.372108 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
May 13 02:21:48.372121 systemd[1]: Stopped systemd-fsck-usr.service.
May 13 02:21:48.372134 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 13 02:21:48.372148 systemd[1]: Starting systemd-journald.service - Journal Service...
May 13 02:21:48.372161 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 13 02:21:48.372175 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 13 02:21:48.372190 kernel: fuse: init (API version 7.39)
May 13 02:21:48.372202 kernel: loop: module loaded
May 13 02:21:48.372215 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
May 13 02:21:48.372228 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
May 13 02:21:48.372241 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 13 02:21:48.372254 systemd[1]: verity-setup.service: Deactivated successfully.
May 13 02:21:48.372268 systemd[1]: Stopped verity-setup.service.
May 13 02:21:48.372284 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 13 02:21:48.372298 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
May 13 02:21:48.372311 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
May 13 02:21:48.372328 systemd[1]: Mounted media.mount - External Media Directory.
May 13 02:21:48.372341 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
May 13 02:21:48.372355 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
May 13 02:21:48.372368 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
May 13 02:21:48.372382 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
May 13 02:21:48.372395 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 13 02:21:48.372408 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 13 02:21:48.372421 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
May 13 02:21:48.372434 kernel: ACPI: bus type drm_connector registered
May 13 02:21:48.372448 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 13 02:21:48.372462 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 13 02:21:48.372475 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 13 02:21:48.372489 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 13 02:21:48.372503 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 13 02:21:48.372545 systemd-journald[1092]: Collecting audit messages is disabled.
May 13 02:21:48.372577 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 13 02:21:48.372591 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 13 02:21:48.372609 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
May 13 02:21:48.372622 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 13 02:21:48.372635 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 13 02:21:48.372649 systemd-journald[1092]: Journal started
May 13 02:21:48.372677 systemd-journald[1092]: Runtime Journal (/run/log/journal/b790b546a7df45f4825f158485c5492e) is 8M, max 78.2M, 70.2M free.
May 13 02:21:47.967414 systemd[1]: Queued start job for default target multi-user.target.
May 13 02:21:47.975929 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
May 13 02:21:47.976339 systemd[1]: systemd-journald.service: Deactivated successfully.
May 13 02:21:48.379809 systemd[1]: Started systemd-journald.service - Journal Service.
May 13 02:21:48.382142 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 13 02:21:48.383007 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 13 02:21:48.383943 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
May 13 02:21:48.385224 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
May 13 02:21:48.396591 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 13 02:21:48.401878 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
May 13 02:21:48.405993 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
May 13 02:21:48.406643 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 13 02:21:48.406750 systemd[1]: Reached target local-fs.target - Local File Systems.
May 13 02:21:48.408629 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
May 13 02:21:48.414694 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
May 13 02:21:48.419607 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
May 13 02:21:48.421072 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 13 02:21:48.424928 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
May 13 02:21:48.428677 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
May 13 02:21:48.429624 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 13 02:21:48.432929 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
May 13 02:21:48.434046 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 13 02:21:48.435264 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 13 02:21:48.439013 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
May 13 02:21:48.447001 systemd[1]: Starting systemd-sysusers.service - Create System Users...
May 13 02:21:48.451709 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 13 02:21:48.453016 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
May 13 02:21:48.454052 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
May 13 02:21:48.455280 systemd-journald[1092]: Time spent on flushing to /var/log/journal/b790b546a7df45f4825f158485c5492e is 36.381ms for 961 entries.
May 13 02:21:48.455280 systemd-journald[1092]: System Journal (/var/log/journal/b790b546a7df45f4825f158485c5492e) is 8M, max 584.8M, 576.8M free.
May 13 02:21:48.520412 systemd-journald[1092]: Received client request to flush runtime journal.
May 13 02:21:48.520480 kernel: loop0: detected capacity change from 0 to 109808
May 13 02:21:48.460761 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
May 13 02:21:48.470896 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
May 13 02:21:48.472514 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
May 13 02:21:48.475069 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
May 13 02:21:48.482615 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
May 13 02:21:48.492093 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 13 02:21:48.521691 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
May 13 02:21:48.523470 udevadm[1144]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
May 13 02:21:48.559811 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 13 02:21:48.578018 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
May 13 02:21:48.587986 kernel: loop1: detected capacity change from 0 to 151640
May 13 02:21:48.593668 systemd[1]: Finished systemd-sysusers.service - Create System Users.
May 13 02:21:48.599097 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 13 02:21:48.653365 systemd-tmpfiles[1158]: ACLs are not supported, ignoring.
May 13 02:21:48.654123 systemd-tmpfiles[1158]: ACLs are not supported, ignoring.
May 13 02:21:48.663863 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 13 02:21:48.684087 kernel: loop2: detected capacity change from 0 to 8
May 13 02:21:48.708423 kernel: loop3: detected capacity change from 0 to 210664
May 13 02:21:48.767825 kernel: loop4: detected capacity change from 0 to 109808
May 13 02:21:48.833865 kernel: loop5: detected capacity change from 0 to 151640
May 13 02:21:48.886672 kernel: loop6: detected capacity change from 0 to 8
May 13 02:21:48.886771 kernel: loop7: detected capacity change from 0 to 210664
May 13 02:21:48.947505 (sd-merge)[1164]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'.
May 13 02:21:48.948390 (sd-merge)[1164]: Merged extensions into '/usr'.
May 13 02:21:48.958230 systemd[1]: Reload requested from client PID 1137 ('systemd-sysext') (unit systemd-sysext.service)...
May 13 02:21:48.958257 systemd[1]: Reloading...
May 13 02:21:49.048838 zram_generator::config[1188]: No configuration found.
May 13 02:21:49.303240 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 13 02:21:49.385695 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 13 02:21:49.386270 systemd[1]: Reloading finished in 427 ms.
May 13 02:21:49.413864 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
May 13 02:21:49.415334 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
May 13 02:21:49.430156 systemd[1]: Starting ensure-sysext.service...
May 13 02:21:49.434060 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 13 02:21:49.438983 ldconfig[1132]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 13 02:21:49.440158 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 13 02:21:49.452530 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
May 13 02:21:49.472867 systemd[1]: Reload requested from client PID 1248 ('systemctl') (unit ensure-sysext.service)...
May 13 02:21:49.472884 systemd[1]: Reloading...
May 13 02:21:49.495848 systemd-tmpfiles[1249]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 13 02:21:49.496093 systemd-tmpfiles[1249]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
May 13 02:21:49.496882 systemd-tmpfiles[1249]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 13 02:21:49.497154 systemd-tmpfiles[1249]: ACLs are not supported, ignoring.
May 13 02:21:49.497217 systemd-tmpfiles[1249]: ACLs are not supported, ignoring.
May 13 02:21:49.506605 systemd-tmpfiles[1249]: Detected autofs mount point /boot during canonicalization of boot.
May 13 02:21:49.506618 systemd-tmpfiles[1249]: Skipping /boot
May 13 02:21:49.520593 systemd-tmpfiles[1249]: Detected autofs mount point /boot during canonicalization of boot.
May 13 02:21:49.520605 systemd-tmpfiles[1249]: Skipping /boot
May 13 02:21:49.525405 systemd-udevd[1250]: Using default interface naming scheme 'v255'.
May 13 02:21:49.568816 zram_generator::config[1277]: No configuration found.
May 13 02:21:49.706894 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1294)
May 13 02:21:49.794295 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
May 13 02:21:49.794386 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
May 13 02:21:49.810866 kernel: ACPI: button: Power Button [PWRF]
May 13 02:21:49.833625 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
May 13 02:21:49.849576 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 13 02:21:49.903813 kernel: mousedev: PS/2 mouse device common for all mice
May 13 02:21:49.913599 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
May 13 02:21:49.913637 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
May 13 02:21:49.920826 kernel: Console: switching to colour dummy device 80x25
May 13 02:21:49.920867 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
May 13 02:21:49.920887 kernel: [drm] features: -context_init
May 13 02:21:49.920904 kernel: [drm] number of scanouts: 1
May 13 02:21:49.922257 kernel: [drm] number of cap sets: 0
May 13 02:21:49.926830 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0
May 13 02:21:49.935135 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
May 13 02:21:49.935220 kernel: Console: switching to colour frame buffer device 160x50
May 13 02:21:49.940793 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
May 13 02:21:49.983561 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
May 13 02:21:49.983951 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 13 02:21:49.986383 systemd[1]: Reloading finished in 513 ms.
May 13 02:21:49.999317 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 13 02:21:49.999791 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 13 02:21:50.051697 systemd[1]: Finished ensure-sysext.service.
May 13 02:21:50.064092 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
May 13 02:21:50.104611 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 13 02:21:50.107817 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 13 02:21:50.114990 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
May 13 02:21:50.115393 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 13 02:21:50.118997 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
May 13 02:21:50.125158 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 13 02:21:50.130984 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 13 02:21:50.137292 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 13 02:21:50.155055 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 13 02:21:50.155294 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 13 02:21:50.157209 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
May 13 02:21:50.158077 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 13 02:21:50.159516 lvm[1372]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 13 02:21:50.160694 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
May 13 02:21:50.164915 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 13 02:21:50.173002 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 13 02:21:50.178478 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
May 13 02:21:50.186987 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
May 13 02:21:50.192719 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 13 02:21:50.193860 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 13 02:21:50.198543 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
May 13 02:21:50.200121 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 13 02:21:50.200833 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 13 02:21:50.201140 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 13 02:21:50.201289 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 13 02:21:50.201554 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 13 02:21:50.201700 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 13 02:21:50.220935 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 13 02:21:50.227909 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
May 13 02:21:50.228474 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 13 02:21:50.233216 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
May 13 02:21:50.234643 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 13 02:21:50.237057 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 13 02:21:50.238100 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 13 02:21:50.242615 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
May 13 02:21:50.260253 lvm[1397]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 13 02:21:50.265042 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
May 13 02:21:50.286460 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
May 13 02:21:50.296039 systemd[1]: Starting systemd-update-done.service - Update is Completed...
May 13 02:21:50.299005 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
May 13 02:21:50.317399 augenrules[1419]: No rules
May 13 02:21:50.321199 systemd[1]: audit-rules.service: Deactivated successfully.
May 13 02:21:50.321771 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 13 02:21:50.328011 systemd[1]: Started systemd-userdbd.service - User Database Manager.
May 13 02:21:50.337272 systemd[1]: Finished systemd-update-done.service - Update is Completed.
May 13 02:21:50.350429 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
May 13 02:21:50.356353 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 13 02:21:50.358199 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 13 02:21:50.422067 systemd-resolved[1386]: Positive Trust Anchors:
May 13 02:21:50.422081 systemd-resolved[1386]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 13 02:21:50.422124 systemd-resolved[1386]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 13 02:21:50.426590 systemd-resolved[1386]: Using system hostname 'ci-4284-0-0-n-0dbb4c7115.novalocal'.
May 13 02:21:50.428605 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 13 02:21:50.430454 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 13 02:21:50.452037 systemd-networkd[1385]: lo: Link UP
May 13 02:21:50.452046 systemd-networkd[1385]: lo: Gained carrier
May 13 02:21:50.453317 systemd-networkd[1385]: Enumeration completed
May 13 02:21:50.453459 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 13 02:21:50.453655 systemd-networkd[1385]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 13 02:21:50.453659 systemd-networkd[1385]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 13 02:21:50.454179 systemd[1]: Reached target network.target - Network.
May 13 02:21:50.455471 systemd-networkd[1385]: eth0: Link UP
May 13 02:21:50.455543 systemd-networkd[1385]: eth0: Gained carrier
May 13 02:21:50.455614 systemd-networkd[1385]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 13 02:21:50.458705 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
May 13 02:21:50.464862 systemd-networkd[1385]: eth0: DHCPv4 address 172.24.4.210/24, gateway 172.24.4.1 acquired from 172.24.4.1
May 13 02:21:50.468035 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
May 13 02:21:50.475675 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
May 13 02:21:50.476479 systemd[1]: Reached target sysinit.target - System Initialization.
May 13 02:21:50.477022 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
May 13 02:21:50.477465 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
May 13 02:21:50.480019 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
May 13 02:21:50.480516 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
May 13 02:21:50.480544 systemd[1]: Reached target paths.target - Path Units.
May 13 02:21:50.480983 systemd[1]: Reached target time-set.target - System Time Set.
May 13 02:21:50.481562 systemd[1]: Started logrotate.timer - Daily rotation of log files.
May 13 02:21:50.485276 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
May 13 02:21:50.485844 systemd[1]: Reached target timers.target - Timer Units.
May 13 02:21:50.488346 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
May 13 02:21:50.492111 systemd[1]: Starting docker.socket - Docker Socket for the API...
May 13 02:21:50.498849 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
May 13 02:21:50.502363 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
May 13 02:21:50.505070 systemd[1]: Reached target ssh-access.target - SSH Access Available.
May 13 02:21:50.514242 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
May 13 02:21:50.518431 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
May 13 02:21:50.522691 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
May 13 02:21:50.524867 systemd[1]: Listening on docker.socket - Docker Socket for the API.
May 13 02:21:50.528457 systemd[1]: Reached target sockets.target - Socket Units.
May 13 02:21:50.529128 systemd[1]: Reached target basic.target - Basic System.
May 13 02:21:50.529710 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
May 13 02:21:50.529748 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
May 13 02:21:50.532919 systemd[1]: Starting containerd.service - containerd container runtime...
May 13 02:21:50.538011 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
May 13 02:21:50.542977 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
May 13 02:21:50.550135 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
May 13 02:21:50.557062 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
May 13 02:21:50.557820 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
May 13 02:21:50.562826 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
May 13 02:21:50.568927 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
May 13 02:21:50.573805 jq[1448]: false
May 13 02:21:50.573166 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
May 13 02:21:50.582362 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
May 13 02:21:50.596153 systemd[1]: Starting systemd-logind.service - User Login Management...
May 13 02:21:50.600434 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
May 13 02:21:50.603522 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
May 13 02:21:50.608037 systemd[1]: Starting update-engine.service - Update Engine...
May 13 02:21:50.614015 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
May 13 02:21:50.616466 extend-filesystems[1449]: Found loop4
May 13 02:21:50.616466 extend-filesystems[1449]: Found loop5
May 13 02:21:50.616466 extend-filesystems[1449]: Found loop6
May 13 02:21:50.616466 extend-filesystems[1449]: Found loop7
May 13 02:21:50.616466 extend-filesystems[1449]: Found vda
May 13 02:21:50.616466 extend-filesystems[1449]: Found vda1
May 13 02:21:50.616466 extend-filesystems[1449]: Found vda2
May 13 02:21:50.616466 extend-filesystems[1449]: Found vda3
May 13 02:21:50.616466 extend-filesystems[1449]: Found usr
May 13 02:21:50.616466 extend-filesystems[1449]: Found vda4
May 13 02:21:50.616466 extend-filesystems[1449]: Found vda6
May 13 02:21:50.616466 extend-filesystems[1449]: Found vda7
May 13 02:21:50.616466 extend-filesystems[1449]: Found vda9
May 13 02:21:50.616466 extend-filesystems[1449]: Checking size of /dev/vda9
May 13 02:21:51.214521 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 2014203 blocks
May 13 02:21:51.216864 kernel: EXT4-fs (vda9): resized filesystem to 2014203
May 13 02:21:51.216893 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1289)
May 13 02:21:50.624855 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
May 13 02:21:51.134152 dbus-daemon[1445]: [system] SELinux support is enabled
May 13 02:21:51.220171 extend-filesystems[1449]: Resized partition /dev/vda9
May 13 02:21:50.625399 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
May 13 02:21:51.225692 extend-filesystems[1478]: resize2fs 1.47.2 (1-Jan-2025)
May 13 02:21:51.225692 extend-filesystems[1478]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
May 13 02:21:51.225692 extend-filesystems[1478]: old_desc_blocks = 1, new_desc_blocks = 1
May 13 02:21:51.225692 extend-filesystems[1478]: The filesystem on /dev/vda9 is now 2014203 (4k) blocks long.
May 13 02:21:51.239530 update_engine[1456]: I20250513 02:21:51.164357 1456 main.cc:92] Flatcar Update Engine starting
May 13 02:21:51.239530 update_engine[1456]: I20250513 02:21:51.189402 1456 update_check_scheduler.cc:74] Next update check in 7m3s
May 13 02:21:50.626543 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
May 13 02:21:51.239948 extend-filesystems[1449]: Resized filesystem in /dev/vda9
May 13 02:21:51.243105 tar[1464]: linux-amd64/helm
May 13 02:21:50.627042 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
May 13 02:21:51.243853 jq[1457]: true
May 13 02:21:51.084012 systemd-timesyncd[1387]: Contacted time server 23.186.168.127:123 (0.flatcar.pool.ntp.org).
May 13 02:21:51.084058 systemd-timesyncd[1387]: Initial clock synchronization to Tue 2025-05-13 02:21:51.083908 UTC.
May 13 02:21:51.244338 jq[1479]: true
May 13 02:21:51.093651 systemd-resolved[1386]: Clock change detected. Flushing caches.
May 13 02:21:51.133165 systemd[1]: motdgen.service: Deactivated successfully.
May 13 02:21:51.133605 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
May 13 02:21:51.140416 systemd[1]: Started dbus.service - D-Bus System Message Bus.
May 13 02:21:51.156338 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
May 13 02:21:51.156363 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
May 13 02:21:51.176557 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
May 13 02:21:51.176582 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
May 13 02:21:51.184008 (ntainerd)[1480]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
May 13 02:21:51.191692 systemd[1]: Started update-engine.service - Update Engine.
May 13 02:21:51.202718 systemd[1]: Started locksmithd.service - Cluster reboot manager.
May 13 02:21:51.221838 systemd[1]: extend-filesystems.service: Deactivated successfully.
May 13 02:21:51.222049 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
May 13 02:21:51.364308 bash[1502]: Updated "/home/core/.ssh/authorized_keys"
May 13 02:21:51.366138 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
May 13 02:21:51.372838 systemd-logind[1455]: New seat seat0.
May 13 02:21:51.374671 systemd[1]: Starting sshkeys.service...
May 13 02:21:51.376496 systemd-logind[1455]: Watching system buttons on /dev/input/event1 (Power Button)
May 13 02:21:51.376527 systemd-logind[1455]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
May 13 02:21:51.381707 systemd[1]: Started systemd-logind.service - User Login Management.
May 13 02:21:51.430960 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
May 13 02:21:51.438989 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
May 13 02:21:51.623618 locksmithd[1483]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
May 13 02:21:51.656119 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
May 13 02:21:51.767216 containerd[1480]: time="2025-05-13T02:21:51Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
May 13 02:21:51.770358 containerd[1480]: time="2025-05-13T02:21:51.770200177Z" level=info msg="starting containerd" revision=88aa2f531d6c2922003cc7929e51daf1c14caa0a version=v2.0.1
May 13 02:21:51.803889 containerd[1480]: time="2025-05-13T02:21:51.801265710Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="9.829µs"
May 13 02:21:51.803889 containerd[1480]: time="2025-05-13T02:21:51.801308731Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
May 13 02:21:51.803889 containerd[1480]: time="2025-05-13T02:21:51.801334099Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
May 13 02:21:51.803889 containerd[1480]: time="2025-05-13T02:21:51.801550194Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
May 13 02:21:51.803889 containerd[1480]: time="2025-05-13T02:21:51.801569931Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
May 13 02:21:51.803889 containerd[1480]: time="2025-05-13T02:21:51.801598555Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
May 13 02:21:51.803889 containerd[1480]: time="2025-05-13T02:21:51.801662004Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
May 13 02:21:51.803889 containerd[1480]: time="2025-05-13T02:21:51.801679517Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
May 13 02:21:51.803889 containerd[1480]: time="2025-05-13T02:21:51.801923855Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
May 13 02:21:51.803889 containerd[1480]: time="2025-05-13T02:21:51.801941418Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
May 13 02:21:51.803889 containerd[1480]: time="2025-05-13T02:21:51.801955414Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
May 13 02:21:51.803889 containerd[1480]: time="2025-05-13T02:21:51.801966324Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
May 13 02:21:51.804222 containerd[1480]: time="2025-05-13T02:21:51.802043710Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
May 13 02:21:51.804222 containerd[1480]: time="2025-05-13T02:21:51.802245528Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
May 13 02:21:51.804222 containerd[1480]: time="2025-05-13T02:21:51.802277919Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
May 13 02:21:51.804222 containerd[1480]: time="2025-05-13T02:21:51.802289601Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
May 13 02:21:51.804222 containerd[1480]: time="2025-05-13T02:21:51.802322072Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
May 13 02:21:51.807986 containerd[1480]: time="2025-05-13T02:21:51.807964033Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
May 13 02:21:51.808658 containerd[1480]: time="2025-05-13T02:21:51.808639480Z" level=info msg="metadata content store policy set" policy=shared
May 13 02:21:51.818665 containerd[1480]: time="2025-05-13T02:21:51.818642440Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
May 13 02:21:51.818892 containerd[1480]: time="2025-05-13T02:21:51.818872682Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
May 13 02:21:51.819498 containerd[1480]: time="2025-05-13T02:21:51.819477637Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
May 13 02:21:51.819584 containerd[1480]: time="2025-05-13T02:21:51.819568828Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
May 13 02:21:51.819665 containerd[1480]: time="2025-05-13T02:21:51.819648848Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
May 13 02:21:51.820070 containerd[1480]: time="2025-05-13T02:21:51.820054859Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
May 13 02:21:51.820177 containerd[1480]: time="2025-05-13T02:21:51.820159506Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
May 13 02:21:51.820260 containerd[1480]: time="2025-05-13T02:21:51.820245237Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
May 13 02:21:51.820332 containerd[1480]: time="2025-05-13T02:21:51.820316871Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
May 13 02:21:51.820408 containerd[1480]: time="2025-05-13T02:21:51.820393655Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
May 13 02:21:51.820490 containerd[1480]: time="2025-05-13T02:21:51.820474036Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
May 13 02:21:51.821035 containerd[1480]: time="2025-05-13T02:21:51.821018707Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
May 13 02:21:51.821272 containerd[1480]: time="2025-05-13T02:21:51.821242477Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
May 13 02:21:51.821566 containerd[1480]: time="2025-05-13T02:21:51.821547158Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
May 13 02:21:51.821659 containerd[1480]: time="2025-05-13T02:21:51.821643339Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
May 13 02:21:51.821925 containerd[1480]: time="2025-05-13T02:21:51.821909257Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
May 13 02:21:51.821999 containerd[1480]: time="2025-05-13T02:21:51.821983737Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
May 13 02:21:51.822094 containerd[1480]: time="2025-05-13T02:21:51.822077232Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
May 13 02:21:51.822195 containerd[1480]: time="2025-05-13T02:21:51.822178082Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
May 13 02:21:51.822321 containerd[1480]: time="2025-05-13T02:21:51.822303857Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
May 13 02:21:51.822410 containerd[1480]: time="2025-05-13T02:21:51.822394728Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
May 13 02:21:51.822683 containerd[1480]: time="2025-05-13T02:21:51.822655497Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
May 13 02:21:51.822770 containerd[1480]: time="2025-05-13T02:21:51.822754723Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
May 13 02:21:51.823026 containerd[1480]: time="2025-05-13T02:21:51.822996006Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
May 13 02:21:51.823480 containerd[1480]: time="2025-05-13T02:21:51.823430450Z" level=info msg="Start snapshots syncer"
May 13 02:21:51.823557 containerd[1480]: time="2025-05-13T02:21:51.823540437Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
May 13 02:21:51.824298 containerd[1480]: time="2025-05-13T02:21:51.823977466Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
May 13 02:21:51.824816 containerd[1480]: time="2025-05-13T02:21:51.824798857Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
May 13 02:21:51.825350 containerd[1480]: time="2025-05-13T02:21:51.825051290Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
May 13 02:21:51.826098 containerd[1480]: time="2025-05-13T02:21:51.826059822Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
May 13 02:21:51.826482 containerd[1480]: time="2025-05-13T02:21:51.826186990Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
May 13 02:21:51.826482 containerd[1480]: time="2025-05-13T02:21:51.826213770Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
May 13 02:21:51.826482 containerd[1480]: time="2025-05-13T02:21:51.826228097Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
May 13 02:21:51.826631 containerd[1480]: time="2025-05-13T02:21:51.826612739Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
May 13 02:21:51.827776 containerd[1480]: time="2025-05-13T02:21:51.826695704Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
May 13 02:21:51.827776 containerd[1480]: time="2025-05-13T02:21:51.826715762Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
May 13 02:21:51.827776 containerd[1480]: time="2025-05-13T02:21:51.826746850Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
May 13 02:21:51.827894 containerd[1480]: time="2025-05-13T02:21:51.827878222Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
May 13 02:21:51.827970 containerd[1480]: time="2025-05-13T02:21:51.827955878Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
May 13 02:21:51.828088 containerd[1480]: time="2025-05-13T02:21:51.828069962Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
May 13 02:21:51.828176 containerd[1480]: time="2025-05-13T02:21:51.828158738Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
May 13 02:21:51.828249 containerd[1480]: time="2025-05-13T02:21:51.828235242Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
May 13 02:21:51.828327 containerd[1480]: time="2025-05-13T02:21:51.828294503Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
May 13 02:21:51.828402 containerd[1480]: time="2025-05-13T02:21:51.828373721Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
May 13 02:21:51.828494 containerd[1480]: time="2025-05-13T02:21:51.828449163Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
May 13 02:21:51.828573 containerd[1480]: time="2025-05-13T02:21:51.828541697Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
May 13 02:21:51.828665 containerd[1480]: time="2025-05-13T02:21:51.828651382Z" level=info msg="runtime interface created"
May 13 02:21:51.828715 containerd[1480]: time="2025-05-13T02:21:51.828704822Z" level=info msg="created NRI interface"
May 13 02:21:51.828787 containerd[1480]: time="2025-05-13T02:21:51.828773521Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
May 13 02:21:51.828917 containerd[1480]: time="2025-05-13T02:21:51.828902253Z" level=info msg="Connect containerd service"
May 13 02:21:51.829130 containerd[1480]: time="2025-05-13T02:21:51.829114140Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
May 13 02:21:51.833142 containerd[1480]: time="2025-05-13T02:21:51.832313851Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 13 02:21:51.943279 sshd_keygen[1474]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
May 13 02:21:51.970025 tar[1464]: linux-amd64/LICENSE
May 13 02:21:51.970215 tar[1464]: linux-amd64/README.md
May 13 02:21:51.986274 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
May 13 02:21:51.991916 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
May 13 02:21:51.999729 systemd[1]: Starting issuegen.service - Generate /run/issue...
May 13 02:21:52.009559 systemd-networkd[1385]: eth0: Gained IPv6LL
May 13 02:21:52.013078 systemd[1]: Started sshd@0-172.24.4.210:22-172.24.4.1:58046.service - OpenSSH per-connection server daemon (172.24.4.1:58046).
May 13 02:21:52.019573 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
May 13 02:21:52.025909 systemd[1]: Reached target network-online.target - Network is Online.
May 13 02:21:52.033557 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 13 02:21:52.040141 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
May 13 02:21:52.059398 systemd[1]: issuegen.service: Deactivated successfully.
May 13 02:21:52.061255 systemd[1]: Finished issuegen.service - Generate /run/issue.
May 13 02:21:52.071667 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
May 13 02:21:52.079966 containerd[1480]: time="2025-05-13T02:21:52.076064135Z" level=info msg="Start subscribing containerd event" May 13 02:21:52.079966 containerd[1480]: time="2025-05-13T02:21:52.076118737Z" level=info msg="Start recovering state" May 13 02:21:52.079966 containerd[1480]: time="2025-05-13T02:21:52.076206903Z" level=info msg="Start event monitor" May 13 02:21:52.079966 containerd[1480]: time="2025-05-13T02:21:52.076224265Z" level=info msg="Start cni network conf syncer for default" May 13 02:21:52.079966 containerd[1480]: time="2025-05-13T02:21:52.076232671Z" level=info msg="Start streaming server" May 13 02:21:52.079966 containerd[1480]: time="2025-05-13T02:21:52.076247038Z" level=info msg="Registered namespace \"k8s.io\" with NRI" May 13 02:21:52.079966 containerd[1480]: time="2025-05-13T02:21:52.076255754Z" level=info msg="runtime interface starting up..." May 13 02:21:52.079966 containerd[1480]: time="2025-05-13T02:21:52.076262126Z" level=info msg="starting plugins..." May 13 02:21:52.079966 containerd[1480]: time="2025-05-13T02:21:52.076275060Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" May 13 02:21:52.079966 containerd[1480]: time="2025-05-13T02:21:52.076815795Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 13 02:21:52.079966 containerd[1480]: time="2025-05-13T02:21:52.076866309Z" level=info msg=serving... address=/run/containerd/containerd.sock May 13 02:21:52.079966 containerd[1480]: time="2025-05-13T02:21:52.077036759Z" level=info msg="containerd successfully booted in 0.310211s" May 13 02:21:52.077112 systemd[1]: Started containerd.service - containerd container runtime. May 13 02:21:52.106334 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 13 02:21:52.109832 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 13 02:21:52.115334 systemd[1]: Started getty@tty1.service - Getty on tty1. 
May 13 02:21:52.122196 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. May 13 02:21:52.124771 systemd[1]: Reached target getty.target - Login Prompts. May 13 02:21:52.969751 sshd[1543]: Accepted publickey for core from 172.24.4.1 port 58046 ssh2: RSA SHA256:gdNeVpR6HpeSX8AXQuuSvHfNPzQEJIBibBwriZyKT5A May 13 02:21:52.973326 sshd-session[1543]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 02:21:52.989879 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 13 02:21:53.001207 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 13 02:21:53.023713 systemd-logind[1455]: New session 1 of user core. May 13 02:21:53.034151 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 13 02:21:53.042931 systemd[1]: Starting user@500.service - User Manager for UID 500... May 13 02:21:53.065701 (systemd)[1571]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 13 02:21:53.070863 systemd-logind[1455]: New session c1 of user core. May 13 02:21:53.281016 systemd[1571]: Queued start job for default target default.target. May 13 02:21:53.285734 systemd[1571]: Created slice app.slice - User Application Slice. May 13 02:21:53.285764 systemd[1571]: Reached target paths.target - Paths. May 13 02:21:53.285803 systemd[1571]: Reached target timers.target - Timers. May 13 02:21:53.290549 systemd[1571]: Starting dbus.socket - D-Bus User Message Bus Socket... May 13 02:21:53.298800 systemd[1571]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 13 02:21:53.299617 systemd[1571]: Reached target sockets.target - Sockets. May 13 02:21:53.299663 systemd[1571]: Reached target basic.target - Basic System. May 13 02:21:53.299699 systemd[1571]: Reached target default.target - Main User Target. May 13 02:21:53.299724 systemd[1571]: Startup finished in 218ms. 
May 13 02:21:53.300161 systemd[1]: Started user@500.service - User Manager for UID 500. May 13 02:21:53.314573 systemd[1]: Started session-1.scope - Session 1 of User core. May 13 02:21:53.785599 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 02:21:53.803172 (kubelet)[1586]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 13 02:21:53.817818 systemd[1]: Started sshd@1-172.24.4.210:22-172.24.4.1:35744.service - OpenSSH per-connection server daemon (172.24.4.1:35744). May 13 02:21:55.360747 sshd[1588]: Accepted publickey for core from 172.24.4.1 port 35744 ssh2: RSA SHA256:gdNeVpR6HpeSX8AXQuuSvHfNPzQEJIBibBwriZyKT5A May 13 02:21:55.364148 sshd-session[1588]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 02:21:55.378941 systemd-logind[1455]: New session 2 of user core. May 13 02:21:55.387921 systemd[1]: Started session-2.scope - Session 2 of User core. May 13 02:21:55.467507 kubelet[1586]: E0513 02:21:55.467287 1586 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 13 02:21:55.472098 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 13 02:21:55.472553 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 13 02:21:55.473533 systemd[1]: kubelet.service: Consumed 2.072s CPU time, 248.1M memory peak. May 13 02:21:56.097663 sshd[1598]: Connection closed by 172.24.4.1 port 35744 May 13 02:21:56.096393 sshd-session[1588]: pam_unix(sshd:session): session closed for user core May 13 02:21:56.116592 systemd[1]: sshd@1-172.24.4.210:22-172.24.4.1:35744.service: Deactivated successfully. 
May 13 02:21:56.120114 systemd[1]: session-2.scope: Deactivated successfully. May 13 02:21:56.124877 systemd-logind[1455]: Session 2 logged out. Waiting for processes to exit. May 13 02:21:56.127655 systemd[1]: Started sshd@2-172.24.4.210:22-172.24.4.1:35756.service - OpenSSH per-connection server daemon (172.24.4.1:35756). May 13 02:21:56.141736 systemd-logind[1455]: Removed session 2. May 13 02:21:57.172808 login[1566]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) May 13 02:21:57.186182 systemd-logind[1455]: New session 3 of user core. May 13 02:21:57.195967 systemd[1]: Started session-3.scope - Session 3 of User core. May 13 02:21:57.198989 login[1567]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) May 13 02:21:57.213331 systemd-logind[1455]: New session 4 of user core. May 13 02:21:57.229668 systemd[1]: Started session-4.scope - Session 4 of User core. May 13 02:21:57.455285 sshd[1604]: Accepted publickey for core from 172.24.4.1 port 35756 ssh2: RSA SHA256:gdNeVpR6HpeSX8AXQuuSvHfNPzQEJIBibBwriZyKT5A May 13 02:21:57.459236 sshd-session[1604]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 02:21:57.478563 systemd-logind[1455]: New session 5 of user core. May 13 02:21:57.493285 systemd[1]: Started session-5.scope - Session 5 of User core. May 13 02:21:58.086504 coreos-metadata[1444]: May 13 02:21:58.086 WARN failed to locate config-drive, using the metadata service API instead May 13 02:21:58.134620 coreos-metadata[1444]: May 13 02:21:58.134 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1 May 13 02:21:58.194058 sshd[1633]: Connection closed by 172.24.4.1 port 35756 May 13 02:21:58.193880 sshd-session[1604]: pam_unix(sshd:session): session closed for user core May 13 02:21:58.200230 systemd-logind[1455]: Session 5 logged out. Waiting for processes to exit. 
May 13 02:21:58.201850 systemd[1]: sshd@2-172.24.4.210:22-172.24.4.1:35756.service: Deactivated successfully. May 13 02:21:58.206814 systemd[1]: session-5.scope: Deactivated successfully. May 13 02:21:58.211632 systemd-logind[1455]: Removed session 5. May 13 02:21:58.485329 coreos-metadata[1444]: May 13 02:21:58.484 INFO Fetch successful May 13 02:21:58.485329 coreos-metadata[1444]: May 13 02:21:58.484 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 May 13 02:21:58.499566 coreos-metadata[1444]: May 13 02:21:58.499 INFO Fetch successful May 13 02:21:58.499566 coreos-metadata[1444]: May 13 02:21:58.499 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 May 13 02:21:58.514675 coreos-metadata[1444]: May 13 02:21:58.514 INFO Fetch successful May 13 02:21:58.514675 coreos-metadata[1444]: May 13 02:21:58.514 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 May 13 02:21:58.528824 coreos-metadata[1444]: May 13 02:21:58.528 INFO Fetch successful May 13 02:21:58.528824 coreos-metadata[1444]: May 13 02:21:58.528 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 May 13 02:21:58.539514 coreos-metadata[1444]: May 13 02:21:58.539 INFO Fetch successful May 13 02:21:58.539514 coreos-metadata[1444]: May 13 02:21:58.539 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 May 13 02:21:58.550909 coreos-metadata[1444]: May 13 02:21:58.550 INFO Fetch successful May 13 02:21:58.576911 coreos-metadata[1506]: May 13 02:21:58.576 WARN failed to locate config-drive, using the metadata service API instead May 13 02:21:58.602668 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. May 13 02:21:58.606034 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
May 13 02:21:58.620132 coreos-metadata[1506]: May 13 02:21:58.620 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 May 13 02:21:58.638391 coreos-metadata[1506]: May 13 02:21:58.638 INFO Fetch successful May 13 02:21:58.638391 coreos-metadata[1506]: May 13 02:21:58.638 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 May 13 02:21:58.648764 coreos-metadata[1506]: May 13 02:21:58.648 INFO Fetch successful May 13 02:21:58.654290 unknown[1506]: wrote ssh authorized keys file for user: core May 13 02:21:58.697243 update-ssh-keys[1647]: Updated "/home/core/.ssh/authorized_keys" May 13 02:21:58.699170 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). May 13 02:21:58.701869 systemd[1]: Finished sshkeys.service. May 13 02:21:58.708708 systemd[1]: Reached target multi-user.target - Multi-User System. May 13 02:21:58.708982 systemd[1]: Startup finished in 1.196s (kernel) + 17.408s (initrd) + 11.108s (userspace) = 29.713s. May 13 02:22:05.570165 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 13 02:22:05.573683 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 02:22:05.917992 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 13 02:22:05.935498 (kubelet)[1659]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 13 02:22:06.015105 kubelet[1659]: E0513 02:22:06.015060 1659 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 13 02:22:06.022250 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 13 02:22:06.022617 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 13 02:22:06.023555 systemd[1]: kubelet.service: Consumed 311ms CPU time, 96.4M memory peak. May 13 02:22:08.213649 systemd[1]: Started sshd@3-172.24.4.210:22-172.24.4.1:56444.service - OpenSSH per-connection server daemon (172.24.4.1:56444). May 13 02:22:09.504485 sshd[1668]: Accepted publickey for core from 172.24.4.1 port 56444 ssh2: RSA SHA256:gdNeVpR6HpeSX8AXQuuSvHfNPzQEJIBibBwriZyKT5A May 13 02:22:09.507446 sshd-session[1668]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 02:22:09.519967 systemd-logind[1455]: New session 6 of user core. May 13 02:22:09.530771 systemd[1]: Started session-6.scope - Session 6 of User core. May 13 02:22:10.141510 sshd[1670]: Connection closed by 172.24.4.1 port 56444 May 13 02:22:10.142604 sshd-session[1668]: pam_unix(sshd:session): session closed for user core May 13 02:22:10.154436 systemd[1]: sshd@3-172.24.4.210:22-172.24.4.1:56444.service: Deactivated successfully. May 13 02:22:10.157739 systemd[1]: session-6.scope: Deactivated successfully. May 13 02:22:10.159726 systemd-logind[1455]: Session 6 logged out. Waiting for processes to exit. 
May 13 02:22:10.163913 systemd[1]: Started sshd@4-172.24.4.210:22-172.24.4.1:56456.service - OpenSSH per-connection server daemon (172.24.4.1:56456). May 13 02:22:10.166203 systemd-logind[1455]: Removed session 6. May 13 02:22:11.311359 sshd[1675]: Accepted publickey for core from 172.24.4.1 port 56456 ssh2: RSA SHA256:gdNeVpR6HpeSX8AXQuuSvHfNPzQEJIBibBwriZyKT5A May 13 02:22:11.314087 sshd-session[1675]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 02:22:11.326398 systemd-logind[1455]: New session 7 of user core. May 13 02:22:11.332787 systemd[1]: Started session-7.scope - Session 7 of User core. May 13 02:22:11.958786 sshd[1678]: Connection closed by 172.24.4.1 port 56456 May 13 02:22:11.959700 sshd-session[1675]: pam_unix(sshd:session): session closed for user core May 13 02:22:11.976189 systemd[1]: sshd@4-172.24.4.210:22-172.24.4.1:56456.service: Deactivated successfully. May 13 02:22:11.979510 systemd[1]: session-7.scope: Deactivated successfully. May 13 02:22:11.982787 systemd-logind[1455]: Session 7 logged out. Waiting for processes to exit. May 13 02:22:11.991072 systemd[1]: Started sshd@5-172.24.4.210:22-172.24.4.1:56460.service - OpenSSH per-connection server daemon (172.24.4.1:56460). May 13 02:22:11.993525 systemd-logind[1455]: Removed session 7. May 13 02:22:13.168838 sshd[1683]: Accepted publickey for core from 172.24.4.1 port 56460 ssh2: RSA SHA256:gdNeVpR6HpeSX8AXQuuSvHfNPzQEJIBibBwriZyKT5A May 13 02:22:13.171442 sshd-session[1683]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 02:22:13.186381 systemd-logind[1455]: New session 8 of user core. May 13 02:22:13.197829 systemd[1]: Started session-8.scope - Session 8 of User core. 
May 13 02:22:13.909836 sshd[1686]: Connection closed by 172.24.4.1 port 56460 May 13 02:22:13.910910 sshd-session[1683]: pam_unix(sshd:session): session closed for user core May 13 02:22:13.924572 systemd[1]: Started sshd@6-172.24.4.210:22-172.24.4.1:34170.service - OpenSSH per-connection server daemon (172.24.4.1:34170). May 13 02:22:13.926852 systemd[1]: sshd@5-172.24.4.210:22-172.24.4.1:56460.service: Deactivated successfully. May 13 02:22:13.938132 systemd[1]: session-8.scope: Deactivated successfully. May 13 02:22:13.942145 systemd-logind[1455]: Session 8 logged out. Waiting for processes to exit. May 13 02:22:13.945308 systemd-logind[1455]: Removed session 8. May 13 02:22:15.482510 sshd[1689]: Accepted publickey for core from 172.24.4.1 port 34170 ssh2: RSA SHA256:gdNeVpR6HpeSX8AXQuuSvHfNPzQEJIBibBwriZyKT5A May 13 02:22:15.485529 sshd-session[1689]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 02:22:15.499433 systemd-logind[1455]: New session 9 of user core. May 13 02:22:15.505849 systemd[1]: Started session-9.scope - Session 9 of User core. May 13 02:22:16.002165 sudo[1695]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 13 02:22:16.004134 sudo[1695]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 13 02:22:16.027193 sudo[1695]: pam_unix(sudo:session): session closed for user root May 13 02:22:16.069881 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 13 02:22:16.074995 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 02:22:16.204581 sshd[1694]: Connection closed by 172.24.4.1 port 34170 May 13 02:22:16.205762 sshd-session[1689]: pam_unix(sshd:session): session closed for user core May 13 02:22:16.225020 systemd[1]: sshd@6-172.24.4.210:22-172.24.4.1:34170.service: Deactivated successfully. May 13 02:22:16.229796 systemd[1]: session-9.scope: Deactivated successfully. 
May 13 02:22:16.235875 systemd-logind[1455]: Session 9 logged out. Waiting for processes to exit. May 13 02:22:16.239032 systemd[1]: Started sshd@7-172.24.4.210:22-172.24.4.1:34174.service - OpenSSH per-connection server daemon (172.24.4.1:34174). May 13 02:22:16.247814 systemd-logind[1455]: Removed session 9. May 13 02:22:16.455610 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 02:22:16.470059 (kubelet)[1711]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 13 02:22:16.555812 kubelet[1711]: E0513 02:22:16.555740 1711 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 13 02:22:16.561035 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 13 02:22:16.561898 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 13 02:22:16.563088 systemd[1]: kubelet.service: Consumed 288ms CPU time, 95.9M memory peak. May 13 02:22:17.503389 sshd[1703]: Accepted publickey for core from 172.24.4.1 port 34174 ssh2: RSA SHA256:gdNeVpR6HpeSX8AXQuuSvHfNPzQEJIBibBwriZyKT5A May 13 02:22:17.506329 sshd-session[1703]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 02:22:17.517948 systemd-logind[1455]: New session 10 of user core. May 13 02:22:17.526771 systemd[1]: Started session-10.scope - Session 10 of User core. 
May 13 02:22:17.833779 sudo[1720]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 13 02:22:17.835177 sudo[1720]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 13 02:22:17.843737 sudo[1720]: pam_unix(sudo:session): session closed for user root May 13 02:22:17.855619 sudo[1719]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules May 13 02:22:17.856271 sudo[1719]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 13 02:22:17.878089 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 13 02:22:17.954876 augenrules[1742]: No rules May 13 02:22:17.957984 systemd[1]: audit-rules.service: Deactivated successfully. May 13 02:22:17.958449 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 13 02:22:17.960449 sudo[1719]: pam_unix(sudo:session): session closed for user root May 13 02:22:18.182632 sshd[1718]: Connection closed by 172.24.4.1 port 34174 May 13 02:22:18.183620 sshd-session[1703]: pam_unix(sshd:session): session closed for user core May 13 02:22:18.205679 systemd[1]: sshd@7-172.24.4.210:22-172.24.4.1:34174.service: Deactivated successfully. May 13 02:22:18.209069 systemd[1]: session-10.scope: Deactivated successfully. May 13 02:22:18.212784 systemd-logind[1455]: Session 10 logged out. Waiting for processes to exit. May 13 02:22:18.216117 systemd[1]: Started sshd@8-172.24.4.210:22-172.24.4.1:34188.service - OpenSSH per-connection server daemon (172.24.4.1:34188). May 13 02:22:18.219684 systemd-logind[1455]: Removed session 10. 
May 13 02:22:19.216815 sshd[1750]: Accepted publickey for core from 172.24.4.1 port 34188 ssh2: RSA SHA256:gdNeVpR6HpeSX8AXQuuSvHfNPzQEJIBibBwriZyKT5A May 13 02:22:19.219654 sshd-session[1750]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 02:22:19.230660 systemd-logind[1455]: New session 11 of user core. May 13 02:22:19.239761 systemd[1]: Started session-11.scope - Session 11 of User core. May 13 02:22:19.546835 sudo[1754]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 13 02:22:19.547526 sudo[1754]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 13 02:22:20.253901 systemd[1]: Starting docker.service - Docker Application Container Engine... May 13 02:22:20.269169 (dockerd)[1771]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 13 02:22:20.824200 dockerd[1771]: time="2025-05-13T02:22:20.824115708Z" level=info msg="Starting up" May 13 02:22:20.825019 dockerd[1771]: time="2025-05-13T02:22:20.824988515Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" May 13 02:22:20.944770 dockerd[1771]: time="2025-05-13T02:22:20.944372961Z" level=info msg="Loading containers: start." May 13 02:22:21.166586 kernel: Initializing XFRM netlink socket May 13 02:22:21.276861 systemd-networkd[1385]: docker0: Link UP May 13 02:22:21.343160 dockerd[1771]: time="2025-05-13T02:22:21.343051923Z" level=info msg="Loading containers: done." 
May 13 02:22:21.376293 dockerd[1771]: time="2025-05-13T02:22:21.376188531Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 13 02:22:21.376668 dockerd[1771]: time="2025-05-13T02:22:21.376359932Z" level=info msg="Docker daemon" commit=c710b88579fcb5e0d53f96dcae976d79323b9166 containerd-snapshotter=false storage-driver=overlay2 version=27.4.1 May 13 02:22:21.376668 dockerd[1771]: time="2025-05-13T02:22:21.376633806Z" level=info msg="Daemon has completed initialization" May 13 02:22:21.457961 dockerd[1771]: time="2025-05-13T02:22:21.457250021Z" level=info msg="API listen on /run/docker.sock" May 13 02:22:21.457770 systemd[1]: Started docker.service - Docker Application Container Engine. May 13 02:22:23.173621 containerd[1480]: time="2025-05-13T02:22:23.173536446Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\"" May 13 02:22:23.907188 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1916917309.mount: Deactivated successfully. 
May 13 02:22:25.927973 containerd[1480]: time="2025-05-13T02:22:25.927888012Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 02:22:25.929469 containerd[1480]: time="2025-05-13T02:22:25.929212185Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.12: active requests=0, bytes read=32674881" May 13 02:22:25.930790 containerd[1480]: time="2025-05-13T02:22:25.930726986Z" level=info msg="ImageCreate event name:\"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 02:22:25.934068 containerd[1480]: time="2025-05-13T02:22:25.933993723Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 02:22:25.935129 containerd[1480]: time="2025-05-13T02:22:25.934991955Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.12\" with image id \"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\", size \"32671673\" in 2.761377633s" May 13 02:22:25.935129 containerd[1480]: time="2025-05-13T02:22:25.935025819Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\" returns image reference \"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\"" May 13 02:22:25.954954 containerd[1480]: time="2025-05-13T02:22:25.954907317Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\"" May 13 02:22:26.570978 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. 
May 13 02:22:26.574855 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 02:22:26.736485 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 02:22:26.753755 (kubelet)[2045]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 13 02:22:26.836569 kubelet[2045]: E0513 02:22:26.835992 2045 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 13 02:22:26.840117 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 13 02:22:26.840263 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 13 02:22:26.840637 systemd[1]: kubelet.service: Consumed 196ms CPU time, 94M memory peak. 
May 13 02:22:28.274048 containerd[1480]: time="2025-05-13T02:22:28.273979456Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 02:22:28.277046 containerd[1480]: time="2025-05-13T02:22:28.275987963Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.12: active requests=0, bytes read=29617542" May 13 02:22:28.278678 containerd[1480]: time="2025-05-13T02:22:28.278609439Z" level=info msg="ImageCreate event name:\"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 02:22:28.290526 containerd[1480]: time="2025-05-13T02:22:28.290324852Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 02:22:28.291154 containerd[1480]: time="2025-05-13T02:22:28.290922643Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.12\" with image id \"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\", size \"31105907\" in 2.335821653s" May 13 02:22:28.291154 containerd[1480]: time="2025-05-13T02:22:28.290958019Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\" returns image reference \"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\"" May 13 02:22:28.318198 containerd[1480]: time="2025-05-13T02:22:28.318160517Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\"" May 13 02:22:30.568570 containerd[1480]: time="2025-05-13T02:22:30.568056627Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/kube-scheduler:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 02:22:30.570835 containerd[1480]: time="2025-05-13T02:22:30.570076245Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.12: active requests=0, bytes read=17903690" May 13 02:22:30.572817 containerd[1480]: time="2025-05-13T02:22:30.572771409Z" level=info msg="ImageCreate event name:\"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 02:22:30.582164 containerd[1480]: time="2025-05-13T02:22:30.582091910Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 02:22:30.583766 containerd[1480]: time="2025-05-13T02:22:30.583652347Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.12\" with image id \"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\", size \"19392073\" in 2.265452144s" May 13 02:22:30.583766 containerd[1480]: time="2025-05-13T02:22:30.583683936Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\" returns image reference \"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\"" May 13 02:22:30.620098 containerd[1480]: time="2025-05-13T02:22:30.619900540Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\"" May 13 02:22:32.096714 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2302436184.mount: Deactivated successfully. 
May 13 02:22:32.578068 containerd[1480]: time="2025-05-13T02:22:32.577987101Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.12: active requests=0, bytes read=29185825" May 13 02:22:32.578734 containerd[1480]: time="2025-05-13T02:22:32.578670132Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 02:22:32.580481 containerd[1480]: time="2025-05-13T02:22:32.580401008Z" level=info msg="ImageCreate event name:\"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 02:22:32.581354 containerd[1480]: time="2025-05-13T02:22:32.581231736Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.12\" with image id \"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\", repo tag \"registry.k8s.io/kube-proxy:v1.30.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\", size \"29184836\" in 1.961270773s" May 13 02:22:32.581354 containerd[1480]: time="2025-05-13T02:22:32.581263265Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\" returns image reference \"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\"" May 13 02:22:32.581701 containerd[1480]: time="2025-05-13T02:22:32.581645542Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 02:22:32.599913 containerd[1480]: time="2025-05-13T02:22:32.599876695Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" May 13 02:22:33.211670 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4177079590.mount: Deactivated successfully. 
May 13 02:22:34.457136 containerd[1480]: time="2025-05-13T02:22:34.457028292Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 02:22:34.459940 containerd[1480]: time="2025-05-13T02:22:34.459089458Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185769" May 13 02:22:34.461599 containerd[1480]: time="2025-05-13T02:22:34.461499748Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 02:22:34.466968 containerd[1480]: time="2025-05-13T02:22:34.466918000Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 02:22:34.470296 containerd[1480]: time="2025-05-13T02:22:34.470069090Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.870133986s" May 13 02:22:34.470296 containerd[1480]: time="2025-05-13T02:22:34.470173556Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" May 13 02:22:34.491669 containerd[1480]: time="2025-05-13T02:22:34.491589201Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" May 13 02:22:35.062559 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3425791116.mount: Deactivated successfully. 
May 13 02:22:35.074332 containerd[1480]: time="2025-05-13T02:22:35.074246956Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 02:22:35.075975 containerd[1480]: time="2025-05-13T02:22:35.075879417Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322298" May 13 02:22:35.077416 containerd[1480]: time="2025-05-13T02:22:35.077272340Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 02:22:35.081574 containerd[1480]: time="2025-05-13T02:22:35.081526368Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 02:22:35.084788 containerd[1480]: time="2025-05-13T02:22:35.084642743Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 592.779989ms" May 13 02:22:35.085304 containerd[1480]: time="2025-05-13T02:22:35.085007718Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" May 13 02:22:35.115551 containerd[1480]: time="2025-05-13T02:22:35.115436767Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" May 13 02:22:35.782535 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3104076233.mount: Deactivated successfully. May 13 02:22:36.010627 update_engine[1456]: I20250513 02:22:36.010557 1456 update_attempter.cc:509] Updating boot flags... 
May 13 02:22:36.065526 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2175) May 13 02:22:36.173506 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2178) May 13 02:22:37.070493 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. May 13 02:22:37.076345 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 02:22:37.827160 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 02:22:37.842652 (kubelet)[2222]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 13 02:22:37.917157 kubelet[2222]: E0513 02:22:37.916287 2222 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 13 02:22:37.918335 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 13 02:22:37.918555 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 13 02:22:37.919120 systemd[1]: kubelet.service: Consumed 219ms CPU time, 96.4M memory peak. 
May 13 02:22:39.365449 containerd[1480]: time="2025-05-13T02:22:39.365075833Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 02:22:39.371287 containerd[1480]: time="2025-05-13T02:22:39.371118091Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238579" May 13 02:22:39.379652 containerd[1480]: time="2025-05-13T02:22:39.379512389Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 02:22:39.467381 containerd[1480]: time="2025-05-13T02:22:39.465388019Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 02:22:39.468708 containerd[1480]: time="2025-05-13T02:22:39.468209680Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 4.352654669s" May 13 02:22:39.468708 containerd[1480]: time="2025-05-13T02:22:39.468380978Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" May 13 02:22:44.208401 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 13 02:22:44.208930 systemd[1]: kubelet.service: Consumed 219ms CPU time, 96.4M memory peak. May 13 02:22:44.214063 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 02:22:44.260670 systemd[1]: Reload requested from client PID 2314 ('systemctl') (unit session-11.scope)... 
May 13 02:22:44.260742 systemd[1]: Reloading... May 13 02:22:44.356495 zram_generator::config[2356]: No configuration found. May 13 02:22:44.522791 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 02:22:44.643770 systemd[1]: Reloading finished in 381 ms. May 13 02:22:44.702438 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM May 13 02:22:44.702691 systemd[1]: kubelet.service: Failed with result 'signal'. May 13 02:22:44.703033 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 13 02:22:44.703180 systemd[1]: kubelet.service: Consumed 95ms CPU time, 83.6M memory peak. May 13 02:22:44.705077 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 02:22:44.835306 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 02:22:44.845724 (kubelet)[2426]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 13 02:22:44.896910 kubelet[2426]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 13 02:22:44.896910 kubelet[2426]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 13 02:22:44.896910 kubelet[2426]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 13 02:22:44.897791 kubelet[2426]: I0513 02:22:44.896938 2426 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 13 02:22:45.681563 kubelet[2426]: I0513 02:22:45.681449 2426 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" May 13 02:22:45.681563 kubelet[2426]: I0513 02:22:45.681503 2426 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 13 02:22:45.682530 kubelet[2426]: I0513 02:22:45.682033 2426 server.go:927] "Client rotation is on, will bootstrap in background" May 13 02:22:45.908282 kubelet[2426]: I0513 02:22:45.908227 2426 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 13 02:22:45.910145 kubelet[2426]: E0513 02:22:45.909993 2426 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.24.4.210:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.24.4.210:6443: connect: connection refused May 13 02:22:45.939308 kubelet[2426]: I0513 02:22:45.938967 2426 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 13 02:22:45.939505 kubelet[2426]: I0513 02:22:45.939414 2426 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 13 02:22:45.939956 kubelet[2426]: I0513 02:22:45.939523 2426 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4284-0-0-n-0dbb4c7115.novalocal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} May 13 02:22:45.939956 kubelet[2426]: I0513 02:22:45.939951 2426 topology_manager.go:138] "Creating topology manager with none 
policy" May 13 02:22:45.940273 kubelet[2426]: I0513 02:22:45.939979 2426 container_manager_linux.go:301] "Creating device plugin manager" May 13 02:22:45.940273 kubelet[2426]: I0513 02:22:45.940216 2426 state_mem.go:36] "Initialized new in-memory state store" May 13 02:22:45.943176 kubelet[2426]: I0513 02:22:45.943116 2426 kubelet.go:400] "Attempting to sync node with API server" May 13 02:22:45.943176 kubelet[2426]: I0513 02:22:45.943165 2426 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" May 13 02:22:45.943534 kubelet[2426]: I0513 02:22:45.943214 2426 kubelet.go:312] "Adding apiserver pod source" May 13 02:22:45.943534 kubelet[2426]: I0513 02:22:45.943237 2426 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 13 02:22:45.956640 kubelet[2426]: W0513 02:22:45.955256 2426 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.24.4.210:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.210:6443: connect: connection refused May 13 02:22:45.956640 kubelet[2426]: E0513 02:22:45.955391 2426 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.24.4.210:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.210:6443: connect: connection refused May 13 02:22:45.956640 kubelet[2426]: W0513 02:22:45.955978 2426 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.24.4.210:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4284-0-0-n-0dbb4c7115.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.210:6443: connect: connection refused May 13 02:22:45.956640 kubelet[2426]: E0513 02:22:45.956059 2426 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
"https://172.24.4.210:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4284-0-0-n-0dbb4c7115.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.210:6443: connect: connection refused May 13 02:22:45.956950 kubelet[2426]: I0513 02:22:45.956724 2426 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v2.0.1" apiVersion="v1" May 13 02:22:45.961482 kubelet[2426]: I0513 02:22:45.960106 2426 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 13 02:22:45.961482 kubelet[2426]: W0513 02:22:45.960220 2426 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 13 02:22:45.961482 kubelet[2426]: I0513 02:22:45.961437 2426 server.go:1264] "Started kubelet" May 13 02:22:45.965723 kubelet[2426]: I0513 02:22:45.965673 2426 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 13 02:22:45.981361 kubelet[2426]: I0513 02:22:45.981249 2426 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 13 02:22:45.982039 kubelet[2426]: I0513 02:22:45.981983 2426 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 13 02:22:45.983521 kubelet[2426]: I0513 02:22:45.983052 2426 server.go:455] "Adding debug handlers to kubelet server" May 13 02:22:45.986940 kubelet[2426]: E0513 02:22:45.986675 2426 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.24.4.210:6443/api/v1/namespaces/default/events\": dial tcp 172.24.4.210:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4284-0-0-n-0dbb4c7115.novalocal.183ef4e418291639 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4284-0-0-n-0dbb4c7115.novalocal,UID:ci-4284-0-0-n-0dbb4c7115.novalocal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4284-0-0-n-0dbb4c7115.novalocal,},FirstTimestamp:2025-05-13 02:22:45.961397817 +0000 UTC m=+1.110867288,LastTimestamp:2025-05-13 02:22:45.961397817 +0000 UTC m=+1.110867288,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4284-0-0-n-0dbb4c7115.novalocal,}" May 13 02:22:45.991550 kubelet[2426]: I0513 02:22:45.991522 2426 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 13 02:22:45.998212 kubelet[2426]: I0513 02:22:45.998179 2426 volume_manager.go:291] "Starting Kubelet Volume Manager" May 13 02:22:45.999320 kubelet[2426]: I0513 02:22:45.999284 2426 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 13 02:22:45.999968 kubelet[2426]: I0513 02:22:45.999650 2426 reconciler.go:26] "Reconciler: start to sync state" May 13 02:22:46.002573 kubelet[2426]: W0513 02:22:46.001633 2426 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.24.4.210:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.210:6443: connect: connection refused May 13 02:22:46.002573 kubelet[2426]: E0513 02:22:46.001744 2426 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.24.4.210:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.210:6443: connect: connection refused May 13 02:22:46.002573 kubelet[2426]: E0513 02:22:46.001872 2426 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://172.24.4.210:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4284-0-0-n-0dbb4c7115.novalocal?timeout=10s\": dial tcp 172.24.4.210:6443: connect: connection refused" interval="200ms" May 13 02:22:46.005601 kubelet[2426]: I0513 02:22:46.005557 2426 factory.go:221] Registration of the containerd container factory successfully May 13 02:22:46.005601 kubelet[2426]: I0513 02:22:46.005589 2426 factory.go:221] Registration of the systemd container factory successfully May 13 02:22:46.005800 kubelet[2426]: I0513 02:22:46.005688 2426 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 13 02:22:46.026193 kubelet[2426]: I0513 02:22:46.026133 2426 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 13 02:22:46.027931 kubelet[2426]: I0513 02:22:46.027661 2426 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 13 02:22:46.027931 kubelet[2426]: I0513 02:22:46.027694 2426 status_manager.go:217] "Starting to sync pod status with apiserver" May 13 02:22:46.027931 kubelet[2426]: I0513 02:22:46.027709 2426 kubelet.go:2337] "Starting kubelet main sync loop" May 13 02:22:46.027931 kubelet[2426]: E0513 02:22:46.027751 2426 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 13 02:22:46.030291 kubelet[2426]: W0513 02:22:46.030099 2426 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.24.4.210:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.210:6443: connect: connection refused May 13 02:22:46.030291 kubelet[2426]: E0513 02:22:46.030261 2426 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.24.4.210:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.210:6443: connect: connection refused May 13 02:22:46.030291 kubelet[2426]: I0513 02:22:46.030270 2426 cpu_manager.go:214] "Starting CPU manager" policy="none" May 13 02:22:46.030291 kubelet[2426]: I0513 02:22:46.030280 2426 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 13 02:22:46.030291 kubelet[2426]: I0513 02:22:46.030295 2426 state_mem.go:36] "Initialized new in-memory state store" May 13 02:22:46.036522 kubelet[2426]: I0513 02:22:46.036495 2426 policy_none.go:49] "None policy: Start" May 13 02:22:46.037104 kubelet[2426]: I0513 02:22:46.037073 2426 memory_manager.go:170] "Starting memorymanager" policy="None" May 13 02:22:46.037104 kubelet[2426]: I0513 02:22:46.037094 2426 state_mem.go:35] "Initializing new in-memory state store" May 13 02:22:46.044619 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
May 13 02:22:46.070112 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 13 02:22:46.075723 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. May 13 02:22:46.080603 kubelet[2426]: I0513 02:22:46.080417 2426 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 13 02:22:46.080718 kubelet[2426]: I0513 02:22:46.080620 2426 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 13 02:22:46.080781 kubelet[2426]: I0513 02:22:46.080719 2426 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 13 02:22:46.082777 kubelet[2426]: E0513 02:22:46.082619 2426 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4284-0-0-n-0dbb4c7115.novalocal\" not found" May 13 02:22:46.102409 kubelet[2426]: I0513 02:22:46.102150 2426 kubelet_node_status.go:73] "Attempting to register node" node="ci-4284-0-0-n-0dbb4c7115.novalocal" May 13 02:22:46.102698 kubelet[2426]: E0513 02:22:46.102649 2426 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.24.4.210:6443/api/v1/nodes\": dial tcp 172.24.4.210:6443: connect: connection refused" node="ci-4284-0-0-n-0dbb4c7115.novalocal" May 13 02:22:46.127860 kubelet[2426]: I0513 02:22:46.127831 2426 topology_manager.go:215] "Topology Admit Handler" podUID="f0ba7153bba36944eb24949ffb7a5224" podNamespace="kube-system" podName="kube-scheduler-ci-4284-0-0-n-0dbb4c7115.novalocal" May 13 02:22:46.129865 kubelet[2426]: I0513 02:22:46.129707 2426 topology_manager.go:215] "Topology Admit Handler" podUID="f8f6f3a1b57651653db6d8bcf274479b" podNamespace="kube-system" podName="kube-apiserver-ci-4284-0-0-n-0dbb4c7115.novalocal" May 13 02:22:46.132312 kubelet[2426]: I0513 02:22:46.132246 2426 topology_manager.go:215] "Topology 
Admit Handler" podUID="5e18128177fc1c91443fc4a28edb5e84" podNamespace="kube-system" podName="kube-controller-manager-ci-4284-0-0-n-0dbb4c7115.novalocal" May 13 02:22:46.142739 systemd[1]: Created slice kubepods-burstable-podf0ba7153bba36944eb24949ffb7a5224.slice - libcontainer container kubepods-burstable-podf0ba7153bba36944eb24949ffb7a5224.slice. May 13 02:22:46.160812 systemd[1]: Created slice kubepods-burstable-podf8f6f3a1b57651653db6d8bcf274479b.slice - libcontainer container kubepods-burstable-podf8f6f3a1b57651653db6d8bcf274479b.slice. May 13 02:22:46.166935 systemd[1]: Created slice kubepods-burstable-pod5e18128177fc1c91443fc4a28edb5e84.slice - libcontainer container kubepods-burstable-pod5e18128177fc1c91443fc4a28edb5e84.slice. May 13 02:22:46.203217 kubelet[2426]: E0513 02:22:46.202987 2426 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.210:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4284-0-0-n-0dbb4c7115.novalocal?timeout=10s\": dial tcp 172.24.4.210:6443: connect: connection refused" interval="400ms" May 13 02:22:46.301893 kubelet[2426]: I0513 02:22:46.301746 2426 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f8f6f3a1b57651653db6d8bcf274479b-k8s-certs\") pod \"kube-apiserver-ci-4284-0-0-n-0dbb4c7115.novalocal\" (UID: \"f8f6f3a1b57651653db6d8bcf274479b\") " pod="kube-system/kube-apiserver-ci-4284-0-0-n-0dbb4c7115.novalocal" May 13 02:22:46.301893 kubelet[2426]: I0513 02:22:46.301849 2426 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f8f6f3a1b57651653db6d8bcf274479b-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4284-0-0-n-0dbb4c7115.novalocal\" (UID: \"f8f6f3a1b57651653db6d8bcf274479b\") " pod="kube-system/kube-apiserver-ci-4284-0-0-n-0dbb4c7115.novalocal" May 13 
02:22:46.301893 kubelet[2426]: I0513 02:22:46.301898 2426 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5e18128177fc1c91443fc4a28edb5e84-ca-certs\") pod \"kube-controller-manager-ci-4284-0-0-n-0dbb4c7115.novalocal\" (UID: \"5e18128177fc1c91443fc4a28edb5e84\") " pod="kube-system/kube-controller-manager-ci-4284-0-0-n-0dbb4c7115.novalocal" May 13 02:22:46.302248 kubelet[2426]: I0513 02:22:46.301941 2426 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5e18128177fc1c91443fc4a28edb5e84-kubeconfig\") pod \"kube-controller-manager-ci-4284-0-0-n-0dbb4c7115.novalocal\" (UID: \"5e18128177fc1c91443fc4a28edb5e84\") " pod="kube-system/kube-controller-manager-ci-4284-0-0-n-0dbb4c7115.novalocal" May 13 02:22:46.302248 kubelet[2426]: I0513 02:22:46.301988 2426 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5e18128177fc1c91443fc4a28edb5e84-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4284-0-0-n-0dbb4c7115.novalocal\" (UID: \"5e18128177fc1c91443fc4a28edb5e84\") " pod="kube-system/kube-controller-manager-ci-4284-0-0-n-0dbb4c7115.novalocal" May 13 02:22:46.302248 kubelet[2426]: I0513 02:22:46.302033 2426 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f0ba7153bba36944eb24949ffb7a5224-kubeconfig\") pod \"kube-scheduler-ci-4284-0-0-n-0dbb4c7115.novalocal\" (UID: \"f0ba7153bba36944eb24949ffb7a5224\") " pod="kube-system/kube-scheduler-ci-4284-0-0-n-0dbb4c7115.novalocal" May 13 02:22:46.302248 kubelet[2426]: I0513 02:22:46.302072 2426 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/f8f6f3a1b57651653db6d8bcf274479b-ca-certs\") pod \"kube-apiserver-ci-4284-0-0-n-0dbb4c7115.novalocal\" (UID: \"f8f6f3a1b57651653db6d8bcf274479b\") " pod="kube-system/kube-apiserver-ci-4284-0-0-n-0dbb4c7115.novalocal" May 13 02:22:46.302554 kubelet[2426]: I0513 02:22:46.302113 2426 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5e18128177fc1c91443fc4a28edb5e84-flexvolume-dir\") pod \"kube-controller-manager-ci-4284-0-0-n-0dbb4c7115.novalocal\" (UID: \"5e18128177fc1c91443fc4a28edb5e84\") " pod="kube-system/kube-controller-manager-ci-4284-0-0-n-0dbb4c7115.novalocal" May 13 02:22:46.302554 kubelet[2426]: I0513 02:22:46.302166 2426 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5e18128177fc1c91443fc4a28edb5e84-k8s-certs\") pod \"kube-controller-manager-ci-4284-0-0-n-0dbb4c7115.novalocal\" (UID: \"5e18128177fc1c91443fc4a28edb5e84\") " pod="kube-system/kube-controller-manager-ci-4284-0-0-n-0dbb4c7115.novalocal" May 13 02:22:46.306429 kubelet[2426]: I0513 02:22:46.306291 2426 kubelet_node_status.go:73] "Attempting to register node" node="ci-4284-0-0-n-0dbb4c7115.novalocal" May 13 02:22:46.307012 kubelet[2426]: E0513 02:22:46.306934 2426 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.24.4.210:6443/api/v1/nodes\": dial tcp 172.24.4.210:6443: connect: connection refused" node="ci-4284-0-0-n-0dbb4c7115.novalocal" May 13 02:22:46.458934 containerd[1480]: time="2025-05-13T02:22:46.458761545Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4284-0-0-n-0dbb4c7115.novalocal,Uid:f0ba7153bba36944eb24949ffb7a5224,Namespace:kube-system,Attempt:0,}" May 13 02:22:46.465906 containerd[1480]: time="2025-05-13T02:22:46.465739349Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-ci-4284-0-0-n-0dbb4c7115.novalocal,Uid:f8f6f3a1b57651653db6d8bcf274479b,Namespace:kube-system,Attempt:0,}" May 13 02:22:46.471299 containerd[1480]: time="2025-05-13T02:22:46.471070943Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4284-0-0-n-0dbb4c7115.novalocal,Uid:5e18128177fc1c91443fc4a28edb5e84,Namespace:kube-system,Attempt:0,}" May 13 02:22:46.604953 kubelet[2426]: E0513 02:22:46.604846 2426 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.210:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4284-0-0-n-0dbb4c7115.novalocal?timeout=10s\": dial tcp 172.24.4.210:6443: connect: connection refused" interval="800ms" May 13 02:22:46.711772 kubelet[2426]: I0513 02:22:46.711004 2426 kubelet_node_status.go:73] "Attempting to register node" node="ci-4284-0-0-n-0dbb4c7115.novalocal" May 13 02:22:46.711772 kubelet[2426]: E0513 02:22:46.711588 2426 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.24.4.210:6443/api/v1/nodes\": dial tcp 172.24.4.210:6443: connect: connection refused" node="ci-4284-0-0-n-0dbb4c7115.novalocal" May 13 02:22:47.084199 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2846369935.mount: Deactivated successfully. 
May 13 02:22:47.088098 containerd[1480]: time="2025-05-13T02:22:47.087671256Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 13 02:22:47.093843 containerd[1480]: time="2025-05-13T02:22:47.093552034Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146"
May 13 02:22:47.095818 containerd[1480]: time="2025-05-13T02:22:47.095397291Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 13 02:22:47.098843 containerd[1480]: time="2025-05-13T02:22:47.098656867Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 13 02:22:47.101914 containerd[1480]: time="2025-05-13T02:22:47.101596905Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 13 02:22:47.103916 containerd[1480]: time="2025-05-13T02:22:47.103788581Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0"
May 13 02:22:47.105653 containerd[1480]: time="2025-05-13T02:22:47.105538898Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0"
May 13 02:22:47.106797 containerd[1480]: time="2025-05-13T02:22:47.106651473Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 13 02:22:47.109832 containerd[1480]: time="2025-05-13T02:22:47.108709755Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 644.518039ms"
May 13 02:22:47.119931 containerd[1480]: time="2025-05-13T02:22:47.119873514Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 650.23705ms"
May 13 02:22:47.121590 containerd[1480]: time="2025-05-13T02:22:47.121084596Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 641.939494ms"
May 13 02:22:47.150828 containerd[1480]: time="2025-05-13T02:22:47.150749481Z" level=info msg="connecting to shim 7c6a2e09ab26fb16ad6435ed5cd39807e4fa50092bf43476c239d39e1f6f156d" address="unix:///run/containerd/s/29c5e9b5b510772f8513f21f0e0018eb5f1f94bb6140e9644515c8a85d2fba6a" namespace=k8s.io protocol=ttrpc version=3
May 13 02:22:47.179406 containerd[1480]: time="2025-05-13T02:22:47.179352287Z" level=info msg="connecting to shim ed9e4b23d0015bf51ed9c5d8d1b51511bbf84c3ba200e5c140a39c80b011ae4c" address="unix:///run/containerd/s/aaf570191bbfa8a05faf1fd791f3290db9f5f7195dfac016b62630e589122e5b" namespace=k8s.io protocol=ttrpc version=3
May 13 02:22:47.189972 containerd[1480]: time="2025-05-13T02:22:47.189916136Z" level=info msg="connecting to shim eb55ac0c8c750d16b2332db45cd6b9188731fff546db2ca13ee6228b020b2227" address="unix:///run/containerd/s/55ba7827f2fb709a54b65fcd3a3c06b74cf6dc214f696678fed70684e0aec7d4" namespace=k8s.io protocol=ttrpc version=3
May 13 02:22:47.199134 kubelet[2426]: W0513 02:22:47.199093 2426 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.24.4.210:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.210:6443: connect: connection refused
May 13 02:22:47.199720 kubelet[2426]: E0513 02:22:47.199628 2426 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.24.4.210:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.210:6443: connect: connection refused
May 13 02:22:47.207624 systemd[1]: Started cri-containerd-7c6a2e09ab26fb16ad6435ed5cd39807e4fa50092bf43476c239d39e1f6f156d.scope - libcontainer container 7c6a2e09ab26fb16ad6435ed5cd39807e4fa50092bf43476c239d39e1f6f156d.
May 13 02:22:47.213805 systemd[1]: Started cri-containerd-ed9e4b23d0015bf51ed9c5d8d1b51511bbf84c3ba200e5c140a39c80b011ae4c.scope - libcontainer container ed9e4b23d0015bf51ed9c5d8d1b51511bbf84c3ba200e5c140a39c80b011ae4c.
May 13 02:22:47.230941 kubelet[2426]: W0513 02:22:47.230776 2426 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.24.4.210:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.210:6443: connect: connection refused
May 13 02:22:47.230941 kubelet[2426]: E0513 02:22:47.230817 2426 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.24.4.210:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.210:6443: connect: connection refused
May 13 02:22:47.237623 systemd[1]: Started cri-containerd-eb55ac0c8c750d16b2332db45cd6b9188731fff546db2ca13ee6228b020b2227.scope - libcontainer container eb55ac0c8c750d16b2332db45cd6b9188731fff546db2ca13ee6228b020b2227.
May 13 02:22:47.299724 containerd[1480]: time="2025-05-13T02:22:47.299506837Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4284-0-0-n-0dbb4c7115.novalocal,Uid:f8f6f3a1b57651653db6d8bcf274479b,Namespace:kube-system,Attempt:0,} returns sandbox id \"ed9e4b23d0015bf51ed9c5d8d1b51511bbf84c3ba200e5c140a39c80b011ae4c\""
May 13 02:22:47.303382 containerd[1480]: time="2025-05-13T02:22:47.303144651Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4284-0-0-n-0dbb4c7115.novalocal,Uid:f0ba7153bba36944eb24949ffb7a5224,Namespace:kube-system,Attempt:0,} returns sandbox id \"7c6a2e09ab26fb16ad6435ed5cd39807e4fa50092bf43476c239d39e1f6f156d\""
May 13 02:22:47.305773 containerd[1480]: time="2025-05-13T02:22:47.305187104Z" level=info msg="CreateContainer within sandbox \"ed9e4b23d0015bf51ed9c5d8d1b51511bbf84c3ba200e5c140a39c80b011ae4c\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
May 13 02:22:47.307140 containerd[1480]: time="2025-05-13T02:22:47.306968219Z" level=info msg="CreateContainer within sandbox \"7c6a2e09ab26fb16ad6435ed5cd39807e4fa50092bf43476c239d39e1f6f156d\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
May 13 02:22:47.322104 containerd[1480]: time="2025-05-13T02:22:47.322062950Z" level=info msg="Container ed0d45d5b5bee78fe864dd1f0276706e9d6713c0d13ad2f2a12c239336933c1c: CDI devices from CRI Config.CDIDevices: []"
May 13 02:22:47.324426 containerd[1480]: time="2025-05-13T02:22:47.324400363Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4284-0-0-n-0dbb4c7115.novalocal,Uid:5e18128177fc1c91443fc4a28edb5e84,Namespace:kube-system,Attempt:0,} returns sandbox id \"eb55ac0c8c750d16b2332db45cd6b9188731fff546db2ca13ee6228b020b2227\""
May 13 02:22:47.328364 containerd[1480]: time="2025-05-13T02:22:47.328325002Z" level=info msg="Container b27317a2ecd87c314a5c7267346c73e3c18dc588107e43854f37a4edf24cb743: CDI devices from CRI Config.CDIDevices: []"
May 13 02:22:47.328505 containerd[1480]: time="2025-05-13T02:22:47.328335012Z" level=info msg="CreateContainer within sandbox \"eb55ac0c8c750d16b2332db45cd6b9188731fff546db2ca13ee6228b020b2227\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
May 13 02:22:47.331979 kubelet[2426]: W0513 02:22:47.331931 2426 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.24.4.210:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.210:6443: connect: connection refused
May 13 02:22:47.332510 kubelet[2426]: E0513 02:22:47.331989 2426 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.24.4.210:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.210:6443: connect: connection refused
May 13 02:22:47.345209 containerd[1480]: time="2025-05-13T02:22:47.345097652Z" level=info msg="CreateContainer within sandbox \"ed9e4b23d0015bf51ed9c5d8d1b51511bbf84c3ba200e5c140a39c80b011ae4c\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"ed0d45d5b5bee78fe864dd1f0276706e9d6713c0d13ad2f2a12c239336933c1c\""
May 13 02:22:47.352592 containerd[1480]: time="2025-05-13T02:22:47.352561529Z" level=info msg="StartContainer for \"ed0d45d5b5bee78fe864dd1f0276706e9d6713c0d13ad2f2a12c239336933c1c\""
May 13 02:22:47.353714 containerd[1480]: time="2025-05-13T02:22:47.353660358Z" level=info msg="connecting to shim ed0d45d5b5bee78fe864dd1f0276706e9d6713c0d13ad2f2a12c239336933c1c" address="unix:///run/containerd/s/aaf570191bbfa8a05faf1fd791f3290db9f5f7195dfac016b62630e589122e5b" protocol=ttrpc version=3
May 13 02:22:47.355003 containerd[1480]: time="2025-05-13T02:22:47.354971260Z" level=info msg="CreateContainer within sandbox \"7c6a2e09ab26fb16ad6435ed5cd39807e4fa50092bf43476c239d39e1f6f156d\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"b27317a2ecd87c314a5c7267346c73e3c18dc588107e43854f37a4edf24cb743\""
May 13 02:22:47.355387 containerd[1480]: time="2025-05-13T02:22:47.355360540Z" level=info msg="StartContainer for \"b27317a2ecd87c314a5c7267346c73e3c18dc588107e43854f37a4edf24cb743\""
May 13 02:22:47.356254 containerd[1480]: time="2025-05-13T02:22:47.356214282Z" level=info msg="Container 758db6908d3e363af7d390cf92487dfeacb0dcc0021d5c9705d5c08aa2238c10: CDI devices from CRI Config.CDIDevices: []"
May 13 02:22:47.356543 containerd[1480]: time="2025-05-13T02:22:47.356516968Z" level=info msg="connecting to shim b27317a2ecd87c314a5c7267346c73e3c18dc588107e43854f37a4edf24cb743" address="unix:///run/containerd/s/29c5e9b5b510772f8513f21f0e0018eb5f1f94bb6140e9644515c8a85d2fba6a" protocol=ttrpc version=3
May 13 02:22:47.374432 containerd[1480]: time="2025-05-13T02:22:47.374322732Z" level=info msg="CreateContainer within sandbox \"eb55ac0c8c750d16b2332db45cd6b9188731fff546db2ca13ee6228b020b2227\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"758db6908d3e363af7d390cf92487dfeacb0dcc0021d5c9705d5c08aa2238c10\""
May 13 02:22:47.377351 containerd[1480]: time="2025-05-13T02:22:47.376187346Z" level=info msg="StartContainer for \"758db6908d3e363af7d390cf92487dfeacb0dcc0021d5c9705d5c08aa2238c10\""
May 13 02:22:47.377351 containerd[1480]: time="2025-05-13T02:22:47.377294881Z" level=info msg="connecting to shim 758db6908d3e363af7d390cf92487dfeacb0dcc0021d5c9705d5c08aa2238c10" address="unix:///run/containerd/s/55ba7827f2fb709a54b65fcd3a3c06b74cf6dc214f696678fed70684e0aec7d4" protocol=ttrpc version=3
May 13 02:22:47.381768 systemd[1]: Started cri-containerd-ed0d45d5b5bee78fe864dd1f0276706e9d6713c0d13ad2f2a12c239336933c1c.scope - libcontainer container ed0d45d5b5bee78fe864dd1f0276706e9d6713c0d13ad2f2a12c239336933c1c.
May 13 02:22:47.389650 systemd[1]: Started cri-containerd-b27317a2ecd87c314a5c7267346c73e3c18dc588107e43854f37a4edf24cb743.scope - libcontainer container b27317a2ecd87c314a5c7267346c73e3c18dc588107e43854f37a4edf24cb743.
May 13 02:22:47.400591 systemd[1]: Started cri-containerd-758db6908d3e363af7d390cf92487dfeacb0dcc0021d5c9705d5c08aa2238c10.scope - libcontainer container 758db6908d3e363af7d390cf92487dfeacb0dcc0021d5c9705d5c08aa2238c10.
May 13 02:22:47.407006 kubelet[2426]: E0513 02:22:47.406928 2426 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.210:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4284-0-0-n-0dbb4c7115.novalocal?timeout=10s\": dial tcp 172.24.4.210:6443: connect: connection refused" interval="1.6s"
May 13 02:22:47.472563 containerd[1480]: time="2025-05-13T02:22:47.472512974Z" level=info msg="StartContainer for \"ed0d45d5b5bee78fe864dd1f0276706e9d6713c0d13ad2f2a12c239336933c1c\" returns successfully"
May 13 02:22:47.480782 kubelet[2426]: W0513 02:22:47.480389 2426 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.24.4.210:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4284-0-0-n-0dbb4c7115.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.210:6443: connect: connection refused
May 13 02:22:47.480782 kubelet[2426]: E0513 02:22:47.480747 2426 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.24.4.210:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4284-0-0-n-0dbb4c7115.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.210:6443: connect: connection refused
May 13 02:22:47.493938 containerd[1480]: time="2025-05-13T02:22:47.493718580Z" level=info msg="StartContainer for \"b27317a2ecd87c314a5c7267346c73e3c18dc588107e43854f37a4edf24cb743\" returns successfully"
May 13 02:22:47.499056 containerd[1480]: time="2025-05-13T02:22:47.499022982Z" level=info msg="StartContainer for \"758db6908d3e363af7d390cf92487dfeacb0dcc0021d5c9705d5c08aa2238c10\" returns successfully"
May 13 02:22:47.514164 kubelet[2426]: I0513 02:22:47.514129 2426 kubelet_node_status.go:73] "Attempting to register node" node="ci-4284-0-0-n-0dbb4c7115.novalocal"
May 13 02:22:47.514481 kubelet[2426]: E0513 02:22:47.514429 2426 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.24.4.210:6443/api/v1/nodes\": dial tcp 172.24.4.210:6443: connect: connection refused" node="ci-4284-0-0-n-0dbb4c7115.novalocal"
May 13 02:22:49.120335 kubelet[2426]: I0513 02:22:49.120133 2426 kubelet_node_status.go:73] "Attempting to register node" node="ci-4284-0-0-n-0dbb4c7115.novalocal"
May 13 02:22:49.218959 kubelet[2426]: E0513 02:22:49.218892 2426 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4284-0-0-n-0dbb4c7115.novalocal\" not found" node="ci-4284-0-0-n-0dbb4c7115.novalocal"
May 13 02:22:49.309739 kubelet[2426]: I0513 02:22:49.309704 2426 kubelet_node_status.go:76] "Successfully registered node" node="ci-4284-0-0-n-0dbb4c7115.novalocal"
May 13 02:22:49.318989 kubelet[2426]: E0513 02:22:49.318925 2426 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4284-0-0-n-0dbb4c7115.novalocal\" not found"
May 13 02:22:49.419945 kubelet[2426]: E0513 02:22:49.419442 2426 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4284-0-0-n-0dbb4c7115.novalocal\" not found"
May 13 02:22:49.520493 kubelet[2426]: E0513 02:22:49.520404 2426 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4284-0-0-n-0dbb4c7115.novalocal\" not found"
May 13 02:22:49.620638 kubelet[2426]: E0513 02:22:49.620567 2426 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4284-0-0-n-0dbb4c7115.novalocal\" not found"
May 13 02:22:49.721517 kubelet[2426]: E0513 02:22:49.721281 2426 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4284-0-0-n-0dbb4c7115.novalocal\" not found"
May 13 02:22:49.821854 kubelet[2426]: E0513 02:22:49.821782 2426 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4284-0-0-n-0dbb4c7115.novalocal\" not found"
May 13 02:22:49.922375 kubelet[2426]: E0513 02:22:49.922300 2426 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4284-0-0-n-0dbb4c7115.novalocal\" not found"
May 13 02:22:50.023374 kubelet[2426]: E0513 02:22:50.023197 2426 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4284-0-0-n-0dbb4c7115.novalocal\" not found"
May 13 02:22:50.124339 kubelet[2426]: E0513 02:22:50.124230 2426 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4284-0-0-n-0dbb4c7115.novalocal\" not found"
May 13 02:22:50.225176 kubelet[2426]: E0513 02:22:50.225105 2426 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4284-0-0-n-0dbb4c7115.novalocal\" not found"
May 13 02:22:50.326063 kubelet[2426]: E0513 02:22:50.325986 2426 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4284-0-0-n-0dbb4c7115.novalocal\" not found"
May 13 02:22:50.427314 kubelet[2426]: E0513 02:22:50.427191 2426 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4284-0-0-n-0dbb4c7115.novalocal\" not found"
May 13 02:22:50.527912 kubelet[2426]: E0513 02:22:50.527675 2426 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4284-0-0-n-0dbb4c7115.novalocal\" not found"
May 13 02:22:50.953388 kubelet[2426]: I0513 02:22:50.953331 2426 apiserver.go:52] "Watching apiserver"
May 13 02:22:51.000172 kubelet[2426]: I0513 02:22:51.000104 2426 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
May 13 02:22:51.557728 systemd[1]: Reload requested from client PID 2703 ('systemctl') (unit session-11.scope)...
May 13 02:22:51.557788 systemd[1]: Reloading...
May 13 02:22:51.705499 zram_generator::config[2754]: No configuration found.
May 13 02:22:51.842441 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 13 02:22:51.989028 systemd[1]: Reloading finished in 430 ms.
May 13 02:22:52.021563 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
May 13 02:22:52.035268 systemd[1]: kubelet.service: Deactivated successfully.
May 13 02:22:52.035551 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 13 02:22:52.035613 systemd[1]: kubelet.service: Consumed 1.407s CPU time, 116M memory peak.
May 13 02:22:52.041036 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 13 02:22:52.178788 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 13 02:22:52.188749 (kubelet)[2813]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
May 13 02:22:52.239872 kubelet[2813]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 13 02:22:52.239872 kubelet[2813]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
May 13 02:22:52.239872 kubelet[2813]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 13 02:22:52.240264 kubelet[2813]: I0513 02:22:52.239929 2813 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
May 13 02:22:52.244119 kubelet[2813]: I0513 02:22:52.244090 2813 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
May 13 02:22:52.244119 kubelet[2813]: I0513 02:22:52.244110 2813 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
May 13 02:22:52.244304 kubelet[2813]: I0513 02:22:52.244278 2813 server.go:927] "Client rotation is on, will bootstrap in background"
May 13 02:22:52.245765 kubelet[2813]: I0513 02:22:52.245748 2813 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
May 13 02:22:52.247284 kubelet[2813]: I0513 02:22:52.246818 2813 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 13 02:22:52.255457 kubelet[2813]: I0513 02:22:52.255425 2813 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
May 13 02:22:52.255674 kubelet[2813]: I0513 02:22:52.255635 2813 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
May 13 02:22:52.255858 kubelet[2813]: I0513 02:22:52.255668 2813 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4284-0-0-n-0dbb4c7115.novalocal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
May 13 02:22:52.255959 kubelet[2813]: I0513 02:22:52.255859 2813 topology_manager.go:138] "Creating topology manager with none policy"
May 13 02:22:52.255959 kubelet[2813]: I0513 02:22:52.255871 2813 container_manager_linux.go:301] "Creating device plugin manager"
May 13 02:22:52.255959 kubelet[2813]: I0513 02:22:52.255908 2813 state_mem.go:36] "Initialized new in-memory state store"
May 13 02:22:52.256038 kubelet[2813]: I0513 02:22:52.255992 2813 kubelet.go:400] "Attempting to sync node with API server"
May 13 02:22:52.256038 kubelet[2813]: I0513 02:22:52.256007 2813 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
May 13 02:22:52.256038 kubelet[2813]: I0513 02:22:52.256025 2813 kubelet.go:312] "Adding apiserver pod source"
May 13 02:22:52.257058 kubelet[2813]: I0513 02:22:52.256040 2813 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
May 13 02:22:52.259784 kubelet[2813]: I0513 02:22:52.259769 2813 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v2.0.1" apiVersion="v1"
May 13 02:22:52.260535 kubelet[2813]: I0513 02:22:52.260012 2813 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
May 13 02:22:52.260535 kubelet[2813]: I0513 02:22:52.260414 2813 server.go:1264] "Started kubelet"
May 13 02:22:52.262273 kubelet[2813]: I0513 02:22:52.262259 2813 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
May 13 02:22:52.266748 kubelet[2813]: I0513 02:22:52.266701 2813 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
May 13 02:22:52.268763 kubelet[2813]: I0513 02:22:52.268744 2813 server.go:455] "Adding debug handlers to kubelet server"
May 13 02:22:52.269755 kubelet[2813]: I0513 02:22:52.269716 2813 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
May 13 02:22:52.269977 kubelet[2813]: I0513 02:22:52.269964 2813 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
May 13 02:22:52.271397 kubelet[2813]: I0513 02:22:52.271383 2813 volume_manager.go:291] "Starting Kubelet Volume Manager"
May 13 02:22:52.273255 kubelet[2813]: I0513 02:22:52.273242 2813 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
May 13 02:22:52.273444 kubelet[2813]: I0513 02:22:52.273434 2813 reconciler.go:26] "Reconciler: start to sync state"
May 13 02:22:52.275195 kubelet[2813]: I0513 02:22:52.275171 2813 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
May 13 02:22:52.276355 kubelet[2813]: I0513 02:22:52.276088 2813 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
May 13 02:22:52.276355 kubelet[2813]: I0513 02:22:52.276113 2813 status_manager.go:217] "Starting to sync pod status with apiserver"
May 13 02:22:52.276355 kubelet[2813]: I0513 02:22:52.276126 2813 kubelet.go:2337] "Starting kubelet main sync loop"
May 13 02:22:52.276355 kubelet[2813]: E0513 02:22:52.276161 2813 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
May 13 02:22:52.290248 kubelet[2813]: I0513 02:22:52.290206 2813 factory.go:221] Registration of the systemd container factory successfully
May 13 02:22:52.291416 kubelet[2813]: I0513 02:22:52.291350 2813 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
May 13 02:22:52.296515 kubelet[2813]: E0513 02:22:52.296135 2813 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
May 13 02:22:52.296515 kubelet[2813]: I0513 02:22:52.296139 2813 factory.go:221] Registration of the containerd container factory successfully
May 13 02:22:52.338167 kubelet[2813]: I0513 02:22:52.338142 2813 cpu_manager.go:214] "Starting CPU manager" policy="none"
May 13 02:22:52.338167 kubelet[2813]: I0513 02:22:52.338160 2813 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
May 13 02:22:52.338167 kubelet[2813]: I0513 02:22:52.338177 2813 state_mem.go:36] "Initialized new in-memory state store"
May 13 02:22:52.338357 kubelet[2813]: I0513 02:22:52.338317 2813 state_mem.go:88] "Updated default CPUSet" cpuSet=""
May 13 02:22:52.338357 kubelet[2813]: I0513 02:22:52.338329 2813 state_mem.go:96] "Updated CPUSet assignments" assignments={}
May 13 02:22:52.338357 kubelet[2813]: I0513 02:22:52.338348 2813 policy_none.go:49] "None policy: Start"
May 13 02:22:52.339587 kubelet[2813]: I0513 02:22:52.339021 2813 memory_manager.go:170] "Starting memorymanager" policy="None"
May 13 02:22:52.339587 kubelet[2813]: I0513 02:22:52.339049 2813 state_mem.go:35] "Initializing new in-memory state store"
May 13 02:22:52.339587 kubelet[2813]: I0513 02:22:52.339199 2813 state_mem.go:75] "Updated machine memory state"
May 13 02:22:52.343791 kubelet[2813]: I0513 02:22:52.343769 2813 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
May 13 02:22:52.343969 kubelet[2813]: I0513 02:22:52.343931 2813 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
May 13 02:22:52.344044 kubelet[2813]: I0513 02:22:52.344028 2813 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
May 13 02:22:52.377687 kubelet[2813]: I0513 02:22:52.377631 2813 topology_manager.go:215] "Topology Admit Handler" podUID="f8f6f3a1b57651653db6d8bcf274479b" podNamespace="kube-system" podName="kube-apiserver-ci-4284-0-0-n-0dbb4c7115.novalocal"
May 13 02:22:52.378122 kubelet[2813]: I0513 02:22:52.378106 2813 topology_manager.go:215] "Topology Admit Handler" podUID="5e18128177fc1c91443fc4a28edb5e84" podNamespace="kube-system" podName="kube-controller-manager-ci-4284-0-0-n-0dbb4c7115.novalocal"
May 13 02:22:52.378270 kubelet[2813]: I0513 02:22:52.378254 2813 topology_manager.go:215] "Topology Admit Handler" podUID="f0ba7153bba36944eb24949ffb7a5224" podNamespace="kube-system" podName="kube-scheduler-ci-4284-0-0-n-0dbb4c7115.novalocal"
May 13 02:22:52.396523 kubelet[2813]: I0513 02:22:52.396487 2813 kubelet_node_status.go:73] "Attempting to register node" node="ci-4284-0-0-n-0dbb4c7115.novalocal"
May 13 02:22:52.468561 kubelet[2813]: W0513 02:22:52.467859 2813 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
May 13 02:22:52.472503 kubelet[2813]: I0513 02:22:52.472423 2813 kubelet_node_status.go:112] "Node was previously registered" node="ci-4284-0-0-n-0dbb4c7115.novalocal"
May 13 02:22:52.472620 kubelet[2813]: I0513 02:22:52.472579 2813 kubelet_node_status.go:76] "Successfully registered node" node="ci-4284-0-0-n-0dbb4c7115.novalocal"
May 13 02:22:52.480771 kubelet[2813]: W0513 02:22:52.480524 2813 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
May 13 02:22:52.480771 kubelet[2813]: W0513 02:22:52.480607 2813 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
May 13 02:22:52.537517 sudo[2844]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
May 13 02:22:52.538064 sudo[2844]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
May 13 02:22:52.574647 kubelet[2813]: I0513 02:22:52.574502 2813 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5e18128177fc1c91443fc4a28edb5e84-ca-certs\") pod \"kube-controller-manager-ci-4284-0-0-n-0dbb4c7115.novalocal\" (UID: \"5e18128177fc1c91443fc4a28edb5e84\") " pod="kube-system/kube-controller-manager-ci-4284-0-0-n-0dbb4c7115.novalocal"
May 13 02:22:52.574647 kubelet[2813]: I0513 02:22:52.574595 2813 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5e18128177fc1c91443fc4a28edb5e84-k8s-certs\") pod \"kube-controller-manager-ci-4284-0-0-n-0dbb4c7115.novalocal\" (UID: \"5e18128177fc1c91443fc4a28edb5e84\") " pod="kube-system/kube-controller-manager-ci-4284-0-0-n-0dbb4c7115.novalocal"
May 13 02:22:52.574647 kubelet[2813]: I0513 02:22:52.574639 2813 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5e18128177fc1c91443fc4a28edb5e84-kubeconfig\") pod \"kube-controller-manager-ci-4284-0-0-n-0dbb4c7115.novalocal\" (UID: \"5e18128177fc1c91443fc4a28edb5e84\") " pod="kube-system/kube-controller-manager-ci-4284-0-0-n-0dbb4c7115.novalocal"
May 13 02:22:52.575291 kubelet[2813]: I0513 02:22:52.574677 2813 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f8f6f3a1b57651653db6d8bcf274479b-ca-certs\") pod \"kube-apiserver-ci-4284-0-0-n-0dbb4c7115.novalocal\" (UID: \"f8f6f3a1b57651653db6d8bcf274479b\") " pod="kube-system/kube-apiserver-ci-4284-0-0-n-0dbb4c7115.novalocal"
May 13 02:22:52.575291 kubelet[2813]: I0513 02:22:52.574711 2813 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f8f6f3a1b57651653db6d8bcf274479b-k8s-certs\") pod \"kube-apiserver-ci-4284-0-0-n-0dbb4c7115.novalocal\" (UID: \"f8f6f3a1b57651653db6d8bcf274479b\") " pod="kube-system/kube-apiserver-ci-4284-0-0-n-0dbb4c7115.novalocal"
May 13 02:22:52.575291 kubelet[2813]: I0513 02:22:52.574750 2813 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f8f6f3a1b57651653db6d8bcf274479b-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4284-0-0-n-0dbb4c7115.novalocal\" (UID: \"f8f6f3a1b57651653db6d8bcf274479b\") " pod="kube-system/kube-apiserver-ci-4284-0-0-n-0dbb4c7115.novalocal"
May 13 02:22:52.575291 kubelet[2813]: I0513 02:22:52.574788 2813 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5e18128177fc1c91443fc4a28edb5e84-flexvolume-dir\") pod \"kube-controller-manager-ci-4284-0-0-n-0dbb4c7115.novalocal\" (UID: \"5e18128177fc1c91443fc4a28edb5e84\") " pod="kube-system/kube-controller-manager-ci-4284-0-0-n-0dbb4c7115.novalocal"
May 13 02:22:52.575396 kubelet[2813]: I0513 02:22:52.574824 2813 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5e18128177fc1c91443fc4a28edb5e84-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4284-0-0-n-0dbb4c7115.novalocal\" (UID: \"5e18128177fc1c91443fc4a28edb5e84\") " pod="kube-system/kube-controller-manager-ci-4284-0-0-n-0dbb4c7115.novalocal"
May 13 02:22:52.575396 kubelet[2813]: I0513 02:22:52.574859 2813 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f0ba7153bba36944eb24949ffb7a5224-kubeconfig\") pod \"kube-scheduler-ci-4284-0-0-n-0dbb4c7115.novalocal\" (UID: \"f0ba7153bba36944eb24949ffb7a5224\") " pod="kube-system/kube-scheduler-ci-4284-0-0-n-0dbb4c7115.novalocal"
May 13 02:22:53.104501 sudo[2844]: pam_unix(sudo:session): session closed for user root
May 13 02:22:53.256776 kubelet[2813]: I0513 02:22:53.256738 2813 apiserver.go:52] "Watching apiserver"
May 13 02:22:53.273563 kubelet[2813]: I0513 02:22:53.273513 2813 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
May 13 02:22:53.330287 kubelet[2813]: W0513 02:22:53.329803 2813 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
May 13 02:22:53.330287 kubelet[2813]: E0513 02:22:53.329869 2813 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4284-0-0-n-0dbb4c7115.novalocal\" already exists" pod="kube-system/kube-apiserver-ci-4284-0-0-n-0dbb4c7115.novalocal"
May 13 02:22:53.359293 kubelet[2813]: I0513 02:22:53.358888 2813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4284-0-0-n-0dbb4c7115.novalocal" podStartSLOduration=1.3588710769999999 podStartE2EDuration="1.358871077s" podCreationTimestamp="2025-05-13 02:22:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 02:22:53.347509875 +0000 UTC m=+1.154627499" watchObservedRunningTime="2025-05-13 02:22:53.358871077 +0000 UTC m=+1.165988711"
May 13 02:22:53.370975 kubelet[2813]: I0513 02:22:53.370465 2813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4284-0-0-n-0dbb4c7115.novalocal" podStartSLOduration=1.370434713 podStartE2EDuration="1.370434713s" podCreationTimestamp="2025-05-13 02:22:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 02:22:53.359103557 +0000 UTC m=+1.166221181" watchObservedRunningTime="2025-05-13 02:22:53.370434713 +0000 UTC m=+1.177552337"
May 13 02:22:53.387335 kubelet[2813]: I0513 02:22:53.387064 2813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4284-0-0-n-0dbb4c7115.novalocal" podStartSLOduration=1.387046383 podStartE2EDuration="1.387046383s" podCreationTimestamp="2025-05-13 02:22:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 02:22:53.371578498 +0000 UTC m=+1.178696122" watchObservedRunningTime="2025-05-13 02:22:53.387046383 +0000 UTC m=+1.194164017"
May 13 02:22:54.970257 sudo[1754]: pam_unix(sudo:session): session closed for user root
May 13 02:22:55.247334 sshd[1753]: Connection closed by 172.24.4.1 port 34188
May 13 02:22:55.248080 sshd-session[1750]: pam_unix(sshd:session): session closed for user core
May 13 02:22:55.256894 systemd[1]: sshd@8-172.24.4.210:22-172.24.4.1:34188.service: Deactivated successfully.
May 13 02:22:55.261377 systemd[1]: session-11.scope: Deactivated successfully.
May 13 02:22:55.262304 systemd[1]: session-11.scope: Consumed 7.230s CPU time, 285.8M memory peak.
May 13 02:22:55.267585 systemd-logind[1455]: Session 11 logged out. Waiting for processes to exit.
May 13 02:22:55.270522 systemd-logind[1455]: Removed session 11.
May 13 02:23:06.375486 kubelet[2813]: I0513 02:23:06.374951 2813 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
May 13 02:23:06.376075 containerd[1480]: time="2025-05-13T02:23:06.375363748Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
May 13 02:23:06.377479 kubelet[2813]: I0513 02:23:06.377102 2813 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 13 02:23:06.426300 kubelet[2813]: I0513 02:23:06.426251 2813 topology_manager.go:215] "Topology Admit Handler" podUID="cdd47df0-6962-46c6-9e89-ec26c915bed5" podNamespace="kube-system" podName="kube-proxy-jqx72" May 13 02:23:06.432617 kubelet[2813]: I0513 02:23:06.429880 2813 topology_manager.go:215] "Topology Admit Handler" podUID="a36d1900-2e43-45d8-8b83-bb11ec8f4b4f" podNamespace="kube-system" podName="cilium-6jj4n" May 13 02:23:06.433069 kubelet[2813]: W0513 02:23:06.433038 2813 reflector.go:547] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4284-0-0-n-0dbb4c7115.novalocal" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4284-0-0-n-0dbb4c7115.novalocal' and this object May 13 02:23:06.433172 kubelet[2813]: E0513 02:23:06.433158 2813 reflector.go:150] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4284-0-0-n-0dbb4c7115.novalocal" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4284-0-0-n-0dbb4c7115.novalocal' and this object May 13 02:23:06.435815 kubelet[2813]: W0513 02:23:06.435796 2813 reflector.go:547] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ci-4284-0-0-n-0dbb4c7115.novalocal" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4284-0-0-n-0dbb4c7115.novalocal' and this object May 13 02:23:06.435956 kubelet[2813]: E0513 02:23:06.435942 2813 reflector.go:150] object-"kube-system"/"kube-proxy": Failed to watch 
*v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ci-4284-0-0-n-0dbb4c7115.novalocal" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4284-0-0-n-0dbb4c7115.novalocal' and this object May 13 02:23:06.441169 systemd[1]: Created slice kubepods-besteffort-podcdd47df0_6962_46c6_9e89_ec26c915bed5.slice - libcontainer container kubepods-besteffort-podcdd47df0_6962_46c6_9e89_ec26c915bed5.slice. May 13 02:23:06.458838 systemd[1]: Created slice kubepods-burstable-poda36d1900_2e43_45d8_8b83_bb11ec8f4b4f.slice - libcontainer container kubepods-burstable-poda36d1900_2e43_45d8_8b83_bb11ec8f4b4f.slice. May 13 02:23:06.463354 kubelet[2813]: I0513 02:23:06.462404 2813 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a36d1900-2e43-45d8-8b83-bb11ec8f4b4f-hostproc\") pod \"cilium-6jj4n\" (UID: \"a36d1900-2e43-45d8-8b83-bb11ec8f4b4f\") " pod="kube-system/cilium-6jj4n" May 13 02:23:06.463354 kubelet[2813]: I0513 02:23:06.462441 2813 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a36d1900-2e43-45d8-8b83-bb11ec8f4b4f-cni-path\") pod \"cilium-6jj4n\" (UID: \"a36d1900-2e43-45d8-8b83-bb11ec8f4b4f\") " pod="kube-system/cilium-6jj4n" May 13 02:23:06.463354 kubelet[2813]: I0513 02:23:06.462488 2813 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a36d1900-2e43-45d8-8b83-bb11ec8f4b4f-host-proc-sys-kernel\") pod \"cilium-6jj4n\" (UID: \"a36d1900-2e43-45d8-8b83-bb11ec8f4b4f\") " pod="kube-system/cilium-6jj4n" May 13 02:23:06.463354 kubelet[2813]: I0513 02:23:06.462514 2813 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-pw89d\" (UniqueName: \"kubernetes.io/projected/a36d1900-2e43-45d8-8b83-bb11ec8f4b4f-kube-api-access-pw89d\") pod \"cilium-6jj4n\" (UID: \"a36d1900-2e43-45d8-8b83-bb11ec8f4b4f\") " pod="kube-system/cilium-6jj4n" May 13 02:23:06.463354 kubelet[2813]: I0513 02:23:06.462539 2813 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a36d1900-2e43-45d8-8b83-bb11ec8f4b4f-host-proc-sys-net\") pod \"cilium-6jj4n\" (UID: \"a36d1900-2e43-45d8-8b83-bb11ec8f4b4f\") " pod="kube-system/cilium-6jj4n" May 13 02:23:06.463354 kubelet[2813]: I0513 02:23:06.462557 2813 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/cdd47df0-6962-46c6-9e89-ec26c915bed5-kube-proxy\") pod \"kube-proxy-jqx72\" (UID: \"cdd47df0-6962-46c6-9e89-ec26c915bed5\") " pod="kube-system/kube-proxy-jqx72" May 13 02:23:06.463603 kubelet[2813]: I0513 02:23:06.462576 2813 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a36d1900-2e43-45d8-8b83-bb11ec8f4b4f-bpf-maps\") pod \"cilium-6jj4n\" (UID: \"a36d1900-2e43-45d8-8b83-bb11ec8f4b4f\") " pod="kube-system/cilium-6jj4n" May 13 02:23:06.463603 kubelet[2813]: I0513 02:23:06.462601 2813 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a36d1900-2e43-45d8-8b83-bb11ec8f4b4f-clustermesh-secrets\") pod \"cilium-6jj4n\" (UID: \"a36d1900-2e43-45d8-8b83-bb11ec8f4b4f\") " pod="kube-system/cilium-6jj4n" May 13 02:23:06.463603 kubelet[2813]: I0513 02:23:06.462620 2813 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/a36d1900-2e43-45d8-8b83-bb11ec8f4b4f-cilium-config-path\") pod \"cilium-6jj4n\" (UID: \"a36d1900-2e43-45d8-8b83-bb11ec8f4b4f\") " pod="kube-system/cilium-6jj4n" May 13 02:23:06.463603 kubelet[2813]: I0513 02:23:06.462647 2813 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a36d1900-2e43-45d8-8b83-bb11ec8f4b4f-cilium-cgroup\") pod \"cilium-6jj4n\" (UID: \"a36d1900-2e43-45d8-8b83-bb11ec8f4b4f\") " pod="kube-system/cilium-6jj4n" May 13 02:23:06.463603 kubelet[2813]: I0513 02:23:06.462665 2813 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a36d1900-2e43-45d8-8b83-bb11ec8f4b4f-lib-modules\") pod \"cilium-6jj4n\" (UID: \"a36d1900-2e43-45d8-8b83-bb11ec8f4b4f\") " pod="kube-system/cilium-6jj4n" May 13 02:23:06.463603 kubelet[2813]: I0513 02:23:06.462682 2813 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cdd47df0-6962-46c6-9e89-ec26c915bed5-lib-modules\") pod \"kube-proxy-jqx72\" (UID: \"cdd47df0-6962-46c6-9e89-ec26c915bed5\") " pod="kube-system/kube-proxy-jqx72" May 13 02:23:06.463748 kubelet[2813]: I0513 02:23:06.462702 2813 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a36d1900-2e43-45d8-8b83-bb11ec8f4b4f-cilium-run\") pod \"cilium-6jj4n\" (UID: \"a36d1900-2e43-45d8-8b83-bb11ec8f4b4f\") " pod="kube-system/cilium-6jj4n" May 13 02:23:06.463748 kubelet[2813]: I0513 02:23:06.462721 2813 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a36d1900-2e43-45d8-8b83-bb11ec8f4b4f-etc-cni-netd\") pod \"cilium-6jj4n\" (UID: 
\"a36d1900-2e43-45d8-8b83-bb11ec8f4b4f\") " pod="kube-system/cilium-6jj4n" May 13 02:23:06.463748 kubelet[2813]: I0513 02:23:06.462741 2813 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jwt6\" (UniqueName: \"kubernetes.io/projected/cdd47df0-6962-46c6-9e89-ec26c915bed5-kube-api-access-4jwt6\") pod \"kube-proxy-jqx72\" (UID: \"cdd47df0-6962-46c6-9e89-ec26c915bed5\") " pod="kube-system/kube-proxy-jqx72" May 13 02:23:06.463748 kubelet[2813]: I0513 02:23:06.462758 2813 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a36d1900-2e43-45d8-8b83-bb11ec8f4b4f-xtables-lock\") pod \"cilium-6jj4n\" (UID: \"a36d1900-2e43-45d8-8b83-bb11ec8f4b4f\") " pod="kube-system/cilium-6jj4n" May 13 02:23:06.463748 kubelet[2813]: I0513 02:23:06.462775 2813 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a36d1900-2e43-45d8-8b83-bb11ec8f4b4f-hubble-tls\") pod \"cilium-6jj4n\" (UID: \"a36d1900-2e43-45d8-8b83-bb11ec8f4b4f\") " pod="kube-system/cilium-6jj4n" May 13 02:23:06.463748 kubelet[2813]: I0513 02:23:06.462794 2813 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cdd47df0-6962-46c6-9e89-ec26c915bed5-xtables-lock\") pod \"kube-proxy-jqx72\" (UID: \"cdd47df0-6962-46c6-9e89-ec26c915bed5\") " pod="kube-system/kube-proxy-jqx72" May 13 02:23:06.553772 kubelet[2813]: I0513 02:23:06.553724 2813 topology_manager.go:215] "Topology Admit Handler" podUID="33792f8b-cfcf-44d7-8a93-779a7b8a6b46" podNamespace="kube-system" podName="cilium-operator-599987898-f4cjx" May 13 02:23:06.568292 systemd[1]: Created slice kubepods-besteffort-pod33792f8b_cfcf_44d7_8a93_779a7b8a6b46.slice - libcontainer container 
kubepods-besteffort-pod33792f8b_cfcf_44d7_8a93_779a7b8a6b46.slice. May 13 02:23:06.663540 kubelet[2813]: I0513 02:23:06.663406 2813 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/33792f8b-cfcf-44d7-8a93-779a7b8a6b46-cilium-config-path\") pod \"cilium-operator-599987898-f4cjx\" (UID: \"33792f8b-cfcf-44d7-8a93-779a7b8a6b46\") " pod="kube-system/cilium-operator-599987898-f4cjx" May 13 02:23:06.663540 kubelet[2813]: I0513 02:23:06.663450 2813 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zvhh\" (UniqueName: \"kubernetes.io/projected/33792f8b-cfcf-44d7-8a93-779a7b8a6b46-kube-api-access-8zvhh\") pod \"cilium-operator-599987898-f4cjx\" (UID: \"33792f8b-cfcf-44d7-8a93-779a7b8a6b46\") " pod="kube-system/cilium-operator-599987898-f4cjx" May 13 02:23:07.578400 kubelet[2813]: E0513 02:23:07.578193 2813 configmap.go:199] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition May 13 02:23:07.578400 kubelet[2813]: E0513 02:23:07.578367 2813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/cdd47df0-6962-46c6-9e89-ec26c915bed5-kube-proxy podName:cdd47df0-6962-46c6-9e89-ec26c915bed5 nodeName:}" failed. No retries permitted until 2025-05-13 02:23:08.078323343 +0000 UTC m=+15.885441017 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/cdd47df0-6962-46c6-9e89-ec26c915bed5-kube-proxy") pod "kube-proxy-jqx72" (UID: "cdd47df0-6962-46c6-9e89-ec26c915bed5") : failed to sync configmap cache: timed out waiting for the condition May 13 02:23:07.667726 containerd[1480]: time="2025-05-13T02:23:07.667609860Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6jj4n,Uid:a36d1900-2e43-45d8-8b83-bb11ec8f4b4f,Namespace:kube-system,Attempt:0,}" May 13 02:23:07.716531 containerd[1480]: time="2025-05-13T02:23:07.716389444Z" level=info msg="connecting to shim f8c9ab615451e1f44ee30b374ce014a3b78f2ece99b542b34f324ffa3be1b621" address="unix:///run/containerd/s/2ba3a2e2e0fb161ee103d21157a14b37b249f984fd5f72299e36ac013d72eb0f" namespace=k8s.io protocol=ttrpc version=3 May 13 02:23:07.774607 systemd[1]: Started cri-containerd-f8c9ab615451e1f44ee30b374ce014a3b78f2ece99b542b34f324ffa3be1b621.scope - libcontainer container f8c9ab615451e1f44ee30b374ce014a3b78f2ece99b542b34f324ffa3be1b621. 
May 13 02:23:07.799027 containerd[1480]: time="2025-05-13T02:23:07.798818331Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-f4cjx,Uid:33792f8b-cfcf-44d7-8a93-779a7b8a6b46,Namespace:kube-system,Attempt:0,}" May 13 02:23:07.810912 containerd[1480]: time="2025-05-13T02:23:07.810710227Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6jj4n,Uid:a36d1900-2e43-45d8-8b83-bb11ec8f4b4f,Namespace:kube-system,Attempt:0,} returns sandbox id \"f8c9ab615451e1f44ee30b374ce014a3b78f2ece99b542b34f324ffa3be1b621\"" May 13 02:23:07.813312 containerd[1480]: time="2025-05-13T02:23:07.813123089Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" May 13 02:23:07.841235 containerd[1480]: time="2025-05-13T02:23:07.841077682Z" level=info msg="connecting to shim d947fdf12e23fcaf9d5360f33e3eb2d6d2bffabf24b60a1af9106e237e6cf367" address="unix:///run/containerd/s/a16c8e281390f4f14c210dc4f944d41f06f90857c7052804a0110b777c3a0839" namespace=k8s.io protocol=ttrpc version=3 May 13 02:23:07.872637 systemd[1]: Started cri-containerd-d947fdf12e23fcaf9d5360f33e3eb2d6d2bffabf24b60a1af9106e237e6cf367.scope - libcontainer container d947fdf12e23fcaf9d5360f33e3eb2d6d2bffabf24b60a1af9106e237e6cf367. 
May 13 02:23:07.916971 containerd[1480]: time="2025-05-13T02:23:07.916931272Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-f4cjx,Uid:33792f8b-cfcf-44d7-8a93-779a7b8a6b46,Namespace:kube-system,Attempt:0,} returns sandbox id \"d947fdf12e23fcaf9d5360f33e3eb2d6d2bffabf24b60a1af9106e237e6cf367\"" May 13 02:23:08.256646 containerd[1480]: time="2025-05-13T02:23:08.256402804Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jqx72,Uid:cdd47df0-6962-46c6-9e89-ec26c915bed5,Namespace:kube-system,Attempt:0,}" May 13 02:23:08.302820 containerd[1480]: time="2025-05-13T02:23:08.302084788Z" level=info msg="connecting to shim 1971934326f606970075c583184b4c60cabfab5b67f17d592701eeb957484916" address="unix:///run/containerd/s/e9789ea945eca6ed1df5ba01350bb8b49f91f2082d7530dfb40af7032166d7dc" namespace=k8s.io protocol=ttrpc version=3 May 13 02:23:08.351826 systemd[1]: Started cri-containerd-1971934326f606970075c583184b4c60cabfab5b67f17d592701eeb957484916.scope - libcontainer container 1971934326f606970075c583184b4c60cabfab5b67f17d592701eeb957484916. 
May 13 02:23:08.402087 containerd[1480]: time="2025-05-13T02:23:08.402029384Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jqx72,Uid:cdd47df0-6962-46c6-9e89-ec26c915bed5,Namespace:kube-system,Attempt:0,} returns sandbox id \"1971934326f606970075c583184b4c60cabfab5b67f17d592701eeb957484916\"" May 13 02:23:08.406147 containerd[1480]: time="2025-05-13T02:23:08.406098883Z" level=info msg="CreateContainer within sandbox \"1971934326f606970075c583184b4c60cabfab5b67f17d592701eeb957484916\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 13 02:23:08.422939 containerd[1480]: time="2025-05-13T02:23:08.422892949Z" level=info msg="Container 610ffd82e987e834b79cfd01c647dc12a94d371fb0825ec04e4e275704029832: CDI devices from CRI Config.CDIDevices: []" May 13 02:23:08.433858 containerd[1480]: time="2025-05-13T02:23:08.433819653Z" level=info msg="CreateContainer within sandbox \"1971934326f606970075c583184b4c60cabfab5b67f17d592701eeb957484916\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"610ffd82e987e834b79cfd01c647dc12a94d371fb0825ec04e4e275704029832\"" May 13 02:23:08.434545 containerd[1480]: time="2025-05-13T02:23:08.434351575Z" level=info msg="StartContainer for \"610ffd82e987e834b79cfd01c647dc12a94d371fb0825ec04e4e275704029832\"" May 13 02:23:08.436395 containerd[1480]: time="2025-05-13T02:23:08.436369332Z" level=info msg="connecting to shim 610ffd82e987e834b79cfd01c647dc12a94d371fb0825ec04e4e275704029832" address="unix:///run/containerd/s/e9789ea945eca6ed1df5ba01350bb8b49f91f2082d7530dfb40af7032166d7dc" protocol=ttrpc version=3 May 13 02:23:08.459604 systemd[1]: Started cri-containerd-610ffd82e987e834b79cfd01c647dc12a94d371fb0825ec04e4e275704029832.scope - libcontainer container 610ffd82e987e834b79cfd01c647dc12a94d371fb0825ec04e4e275704029832. 
May 13 02:23:08.502949 containerd[1480]: time="2025-05-13T02:23:08.502900894Z" level=info msg="StartContainer for \"610ffd82e987e834b79cfd01c647dc12a94d371fb0825ec04e4e275704029832\" returns successfully" May 13 02:23:09.407518 kubelet[2813]: I0513 02:23:09.404984 2813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-jqx72" podStartSLOduration=3.404948187 podStartE2EDuration="3.404948187s" podCreationTimestamp="2025-05-13 02:23:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 02:23:09.402769076 +0000 UTC m=+17.209886780" watchObservedRunningTime="2025-05-13 02:23:09.404948187 +0000 UTC m=+17.212065871" May 13 02:23:12.872311 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2286771847.mount: Deactivated successfully. May 13 02:23:15.187283 containerd[1480]: time="2025-05-13T02:23:15.187233053Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 02:23:15.188534 containerd[1480]: time="2025-05-13T02:23:15.188495607Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" May 13 02:23:15.189665 containerd[1480]: time="2025-05-13T02:23:15.189645047Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 02:23:15.191256 containerd[1480]: time="2025-05-13T02:23:15.191211241Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest 
\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 7.378053367s" May 13 02:23:15.191308 containerd[1480]: time="2025-05-13T02:23:15.191257217Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" May 13 02:23:15.192990 containerd[1480]: time="2025-05-13T02:23:15.192951031Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" May 13 02:23:15.194776 containerd[1480]: time="2025-05-13T02:23:15.194741186Z" level=info msg="CreateContainer within sandbox \"f8c9ab615451e1f44ee30b374ce014a3b78f2ece99b542b34f324ffa3be1b621\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 13 02:23:15.208485 containerd[1480]: time="2025-05-13T02:23:15.208375714Z" level=info msg="Container 9ca926ce60be1251ecd7104ddfae580b80f177ec35f606b4086176cd52581d5c: CDI devices from CRI Config.CDIDevices: []" May 13 02:23:15.222271 containerd[1480]: time="2025-05-13T02:23:15.222240606Z" level=info msg="CreateContainer within sandbox \"f8c9ab615451e1f44ee30b374ce014a3b78f2ece99b542b34f324ffa3be1b621\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"9ca926ce60be1251ecd7104ddfae580b80f177ec35f606b4086176cd52581d5c\"" May 13 02:23:15.223005 containerd[1480]: time="2025-05-13T02:23:15.222929960Z" level=info msg="StartContainer for \"9ca926ce60be1251ecd7104ddfae580b80f177ec35f606b4086176cd52581d5c\"" May 13 02:23:15.224069 containerd[1480]: time="2025-05-13T02:23:15.223965147Z" level=info msg="connecting to shim 9ca926ce60be1251ecd7104ddfae580b80f177ec35f606b4086176cd52581d5c" address="unix:///run/containerd/s/2ba3a2e2e0fb161ee103d21157a14b37b249f984fd5f72299e36ac013d72eb0f" protocol=ttrpc version=3 May 13 02:23:15.252613 
systemd[1]: Started cri-containerd-9ca926ce60be1251ecd7104ddfae580b80f177ec35f606b4086176cd52581d5c.scope - libcontainer container 9ca926ce60be1251ecd7104ddfae580b80f177ec35f606b4086176cd52581d5c. May 13 02:23:15.292968 containerd[1480]: time="2025-05-13T02:23:15.292915539Z" level=info msg="StartContainer for \"9ca926ce60be1251ecd7104ddfae580b80f177ec35f606b4086176cd52581d5c\" returns successfully" May 13 02:23:15.300683 systemd[1]: cri-containerd-9ca926ce60be1251ecd7104ddfae580b80f177ec35f606b4086176cd52581d5c.scope: Deactivated successfully. May 13 02:23:15.305138 containerd[1480]: time="2025-05-13T02:23:15.305052492Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9ca926ce60be1251ecd7104ddfae580b80f177ec35f606b4086176cd52581d5c\" id:\"9ca926ce60be1251ecd7104ddfae580b80f177ec35f606b4086176cd52581d5c\" pid:3213 exited_at:{seconds:1747102995 nanos:304519891}" May 13 02:23:15.305298 containerd[1480]: time="2025-05-13T02:23:15.305182466Z" level=info msg="received exit event container_id:\"9ca926ce60be1251ecd7104ddfae580b80f177ec35f606b4086176cd52581d5c\" id:\"9ca926ce60be1251ecd7104ddfae580b80f177ec35f606b4086176cd52581d5c\" pid:3213 exited_at:{seconds:1747102995 nanos:304519891}" May 13 02:23:15.327022 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9ca926ce60be1251ecd7104ddfae580b80f177ec35f606b4086176cd52581d5c-rootfs.mount: Deactivated successfully. 
May 13 02:23:17.407384 containerd[1480]: time="2025-05-13T02:23:17.406402753Z" level=info msg="CreateContainer within sandbox \"f8c9ab615451e1f44ee30b374ce014a3b78f2ece99b542b34f324ffa3be1b621\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 13 02:23:17.427519 containerd[1480]: time="2025-05-13T02:23:17.425965434Z" level=info msg="Container 878f636ba2a4dccdcb38fc598418f67caf2064e7cfb13c3914491b6b1436e0b5: CDI devices from CRI Config.CDIDevices: []" May 13 02:23:17.455265 containerd[1480]: time="2025-05-13T02:23:17.455156006Z" level=info msg="CreateContainer within sandbox \"f8c9ab615451e1f44ee30b374ce014a3b78f2ece99b542b34f324ffa3be1b621\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"878f636ba2a4dccdcb38fc598418f67caf2064e7cfb13c3914491b6b1436e0b5\"" May 13 02:23:17.459602 containerd[1480]: time="2025-05-13T02:23:17.456750002Z" level=info msg="StartContainer for \"878f636ba2a4dccdcb38fc598418f67caf2064e7cfb13c3914491b6b1436e0b5\"" May 13 02:23:17.460534 containerd[1480]: time="2025-05-13T02:23:17.460249368Z" level=info msg="connecting to shim 878f636ba2a4dccdcb38fc598418f67caf2064e7cfb13c3914491b6b1436e0b5" address="unix:///run/containerd/s/2ba3a2e2e0fb161ee103d21157a14b37b249f984fd5f72299e36ac013d72eb0f" protocol=ttrpc version=3 May 13 02:23:17.505632 systemd[1]: Started cri-containerd-878f636ba2a4dccdcb38fc598418f67caf2064e7cfb13c3914491b6b1436e0b5.scope - libcontainer container 878f636ba2a4dccdcb38fc598418f67caf2064e7cfb13c3914491b6b1436e0b5. May 13 02:23:17.546213 containerd[1480]: time="2025-05-13T02:23:17.546170475Z" level=info msg="StartContainer for \"878f636ba2a4dccdcb38fc598418f67caf2064e7cfb13c3914491b6b1436e0b5\" returns successfully" May 13 02:23:17.551575 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 13 02:23:17.552261 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. 
May 13 02:23:17.553155 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... May 13 02:23:17.556822 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 13 02:23:17.560360 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 13 02:23:17.561100 containerd[1480]: time="2025-05-13T02:23:17.561069252Z" level=info msg="received exit event container_id:\"878f636ba2a4dccdcb38fc598418f67caf2064e7cfb13c3914491b6b1436e0b5\" id:\"878f636ba2a4dccdcb38fc598418f67caf2064e7cfb13c3914491b6b1436e0b5\" pid:3261 exited_at:{seconds:1747102997 nanos:560780028}" May 13 02:23:17.561180 systemd[1]: cri-containerd-878f636ba2a4dccdcb38fc598418f67caf2064e7cfb13c3914491b6b1436e0b5.scope: Deactivated successfully. May 13 02:23:17.562648 containerd[1480]: time="2025-05-13T02:23:17.562581895Z" level=info msg="TaskExit event in podsandbox handler container_id:\"878f636ba2a4dccdcb38fc598418f67caf2064e7cfb13c3914491b6b1436e0b5\" id:\"878f636ba2a4dccdcb38fc598418f67caf2064e7cfb13c3914491b6b1436e0b5\" pid:3261 exited_at:{seconds:1747102997 nanos:560780028}" May 13 02:23:17.595309 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 13 02:23:18.410344 containerd[1480]: time="2025-05-13T02:23:18.410035403Z" level=info msg="CreateContainer within sandbox \"f8c9ab615451e1f44ee30b374ce014a3b78f2ece99b542b34f324ffa3be1b621\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 13 02:23:18.432599 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-878f636ba2a4dccdcb38fc598418f67caf2064e7cfb13c3914491b6b1436e0b5-rootfs.mount: Deactivated successfully. 
May 13 02:23:18.442792 containerd[1480]: time="2025-05-13T02:23:18.442668488Z" level=info msg="Container 1f4a482ada3525efe3ad35de3604d357922799e89a573ec7b64a8b912b96fae7: CDI devices from CRI Config.CDIDevices: []" May 13 02:23:18.460373 containerd[1480]: time="2025-05-13T02:23:18.460075073Z" level=info msg="CreateContainer within sandbox \"f8c9ab615451e1f44ee30b374ce014a3b78f2ece99b542b34f324ffa3be1b621\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"1f4a482ada3525efe3ad35de3604d357922799e89a573ec7b64a8b912b96fae7\"" May 13 02:23:18.461712 containerd[1480]: time="2025-05-13T02:23:18.461687102Z" level=info msg="StartContainer for \"1f4a482ada3525efe3ad35de3604d357922799e89a573ec7b64a8b912b96fae7\"" May 13 02:23:18.464487 containerd[1480]: time="2025-05-13T02:23:18.464272841Z" level=info msg="connecting to shim 1f4a482ada3525efe3ad35de3604d357922799e89a573ec7b64a8b912b96fae7" address="unix:///run/containerd/s/2ba3a2e2e0fb161ee103d21157a14b37b249f984fd5f72299e36ac013d72eb0f" protocol=ttrpc version=3 May 13 02:23:18.504624 systemd[1]: Started cri-containerd-1f4a482ada3525efe3ad35de3604d357922799e89a573ec7b64a8b912b96fae7.scope - libcontainer container 1f4a482ada3525efe3ad35de3604d357922799e89a573ec7b64a8b912b96fae7. May 13 02:23:18.577008 systemd[1]: cri-containerd-1f4a482ada3525efe3ad35de3604d357922799e89a573ec7b64a8b912b96fae7.scope: Deactivated successfully. 
May 13 02:23:18.583410 containerd[1480]: time="2025-05-13T02:23:18.583287538Z" level=info msg="received exit event container_id:\"1f4a482ada3525efe3ad35de3604d357922799e89a573ec7b64a8b912b96fae7\" id:\"1f4a482ada3525efe3ad35de3604d357922799e89a573ec7b64a8b912b96fae7\" pid:3320 exited_at:{seconds:1747102998 nanos:579151126}" May 13 02:23:18.583932 containerd[1480]: time="2025-05-13T02:23:18.583911692Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1f4a482ada3525efe3ad35de3604d357922799e89a573ec7b64a8b912b96fae7\" id:\"1f4a482ada3525efe3ad35de3604d357922799e89a573ec7b64a8b912b96fae7\" pid:3320 exited_at:{seconds:1747102998 nanos:579151126}" May 13 02:23:18.585880 containerd[1480]: time="2025-05-13T02:23:18.585861875Z" level=info msg="StartContainer for \"1f4a482ada3525efe3ad35de3604d357922799e89a573ec7b64a8b912b96fae7\" returns successfully" May 13 02:23:18.615104 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1f4a482ada3525efe3ad35de3604d357922799e89a573ec7b64a8b912b96fae7-rootfs.mount: Deactivated successfully. 
May 13 02:23:18.998524 containerd[1480]: time="2025-05-13T02:23:18.998469007Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 02:23:18.999735 containerd[1480]: time="2025-05-13T02:23:18.999674302Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197"
May 13 02:23:19.001103 containerd[1480]: time="2025-05-13T02:23:19.001055928Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 02:23:19.002529 containerd[1480]: time="2025-05-13T02:23:19.002349268Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 3.809263994s"
May 13 02:23:19.002529 containerd[1480]: time="2025-05-13T02:23:19.002391637Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
May 13 02:23:19.004666 containerd[1480]: time="2025-05-13T02:23:19.004592393Z" level=info msg="CreateContainer within sandbox \"d947fdf12e23fcaf9d5360f33e3eb2d6d2bffabf24b60a1af9106e237e6cf367\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
May 13 02:23:19.013564 containerd[1480]: time="2025-05-13T02:23:19.013380703Z" level=info msg="Container fc8424d7cbbca17285bfc83019671590892d6ff4b30e309c98471556c763ea3a: CDI devices from CRI Config.CDIDevices: []"
May 13 02:23:19.035602 containerd[1480]: time="2025-05-13T02:23:19.035514224Z" level=info msg="CreateContainer within sandbox \"d947fdf12e23fcaf9d5360f33e3eb2d6d2bffabf24b60a1af9106e237e6cf367\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"fc8424d7cbbca17285bfc83019671590892d6ff4b30e309c98471556c763ea3a\""
May 13 02:23:19.036478 containerd[1480]: time="2025-05-13T02:23:19.036045102Z" level=info msg="StartContainer for \"fc8424d7cbbca17285bfc83019671590892d6ff4b30e309c98471556c763ea3a\""
May 13 02:23:19.037036 containerd[1480]: time="2025-05-13T02:23:19.037000317Z" level=info msg="connecting to shim fc8424d7cbbca17285bfc83019671590892d6ff4b30e309c98471556c763ea3a" address="unix:///run/containerd/s/a16c8e281390f4f14c210dc4f944d41f06f90857c7052804a0110b777c3a0839" protocol=ttrpc version=3
May 13 02:23:19.057624 systemd[1]: Started cri-containerd-fc8424d7cbbca17285bfc83019671590892d6ff4b30e309c98471556c763ea3a.scope - libcontainer container fc8424d7cbbca17285bfc83019671590892d6ff4b30e309c98471556c763ea3a.
May 13 02:23:19.094087 containerd[1480]: time="2025-05-13T02:23:19.094044716Z" level=info msg="StartContainer for \"fc8424d7cbbca17285bfc83019671590892d6ff4b30e309c98471556c763ea3a\" returns successfully"
May 13 02:23:19.419665 containerd[1480]: time="2025-05-13T02:23:19.418697300Z" level=info msg="CreateContainer within sandbox \"f8c9ab615451e1f44ee30b374ce014a3b78f2ece99b542b34f324ffa3be1b621\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 13 02:23:19.437362 containerd[1480]: time="2025-05-13T02:23:19.437315608Z" level=info msg="Container d5bdd6f806242dd5fdb8ee09ef5ead8182ca638b55d0d8cfac1df0b34661d5b2: CDI devices from CRI Config.CDIDevices: []"
May 13 02:23:19.457800 containerd[1480]: time="2025-05-13T02:23:19.457759205Z" level=info msg="CreateContainer within sandbox \"f8c9ab615451e1f44ee30b374ce014a3b78f2ece99b542b34f324ffa3be1b621\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"d5bdd6f806242dd5fdb8ee09ef5ead8182ca638b55d0d8cfac1df0b34661d5b2\""
May 13 02:23:19.462036 containerd[1480]: time="2025-05-13T02:23:19.461989403Z" level=info msg="StartContainer for \"d5bdd6f806242dd5fdb8ee09ef5ead8182ca638b55d0d8cfac1df0b34661d5b2\""
May 13 02:23:19.464071 containerd[1480]: time="2025-05-13T02:23:19.463197132Z" level=info msg="connecting to shim d5bdd6f806242dd5fdb8ee09ef5ead8182ca638b55d0d8cfac1df0b34661d5b2" address="unix:///run/containerd/s/2ba3a2e2e0fb161ee103d21157a14b37b249f984fd5f72299e36ac013d72eb0f" protocol=ttrpc version=3
May 13 02:23:19.512684 systemd[1]: Started cri-containerd-d5bdd6f806242dd5fdb8ee09ef5ead8182ca638b55d0d8cfac1df0b34661d5b2.scope - libcontainer container d5bdd6f806242dd5fdb8ee09ef5ead8182ca638b55d0d8cfac1df0b34661d5b2.
May 13 02:23:19.514882 kubelet[2813]: I0513 02:23:19.514136 2813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-f4cjx" podStartSLOduration=2.429552107 podStartE2EDuration="13.514116073s" podCreationTimestamp="2025-05-13 02:23:06 +0000 UTC" firstStartedPulling="2025-05-13 02:23:07.918539349 +0000 UTC m=+15.725656973" lastFinishedPulling="2025-05-13 02:23:19.003103305 +0000 UTC m=+26.810220939" observedRunningTime="2025-05-13 02:23:19.451558056 +0000 UTC m=+27.258675690" watchObservedRunningTime="2025-05-13 02:23:19.514116073 +0000 UTC m=+27.321233697"
May 13 02:23:19.563527 systemd[1]: cri-containerd-d5bdd6f806242dd5fdb8ee09ef5ead8182ca638b55d0d8cfac1df0b34661d5b2.scope: Deactivated successfully.
May 13 02:23:19.565611 containerd[1480]: time="2025-05-13T02:23:19.565492144Z" level=info msg="received exit event container_id:\"d5bdd6f806242dd5fdb8ee09ef5ead8182ca638b55d0d8cfac1df0b34661d5b2\" id:\"d5bdd6f806242dd5fdb8ee09ef5ead8182ca638b55d0d8cfac1df0b34661d5b2\" pid:3395 exited_at:{seconds:1747102999 nanos:564317897}"
May 13 02:23:19.565834 containerd[1480]: time="2025-05-13T02:23:19.565769915Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d5bdd6f806242dd5fdb8ee09ef5ead8182ca638b55d0d8cfac1df0b34661d5b2\" id:\"d5bdd6f806242dd5fdb8ee09ef5ead8182ca638b55d0d8cfac1df0b34661d5b2\" pid:3395 exited_at:{seconds:1747102999 nanos:564317897}"
May 13 02:23:19.566849 containerd[1480]: time="2025-05-13T02:23:19.566382255Z" level=info msg="StartContainer for \"d5bdd6f806242dd5fdb8ee09ef5ead8182ca638b55d0d8cfac1df0b34661d5b2\" returns successfully"
May 13 02:23:19.610000 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d5bdd6f806242dd5fdb8ee09ef5ead8182ca638b55d0d8cfac1df0b34661d5b2-rootfs.mount: Deactivated successfully.
May 13 02:23:20.442097 containerd[1480]: time="2025-05-13T02:23:20.436902583Z" level=info msg="CreateContainer within sandbox \"f8c9ab615451e1f44ee30b374ce014a3b78f2ece99b542b34f324ffa3be1b621\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 13 02:23:20.469578 containerd[1480]: time="2025-05-13T02:23:20.469445425Z" level=info msg="Container 3424a9b8564681e2a3c8aea4b1c1f625d40e47fc2dd81a12b44152b2cd6a8511: CDI devices from CRI Config.CDIDevices: []"
May 13 02:23:20.488742 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount362753386.mount: Deactivated successfully.
May 13 02:23:20.507313 containerd[1480]: time="2025-05-13T02:23:20.507255109Z" level=info msg="CreateContainer within sandbox \"f8c9ab615451e1f44ee30b374ce014a3b78f2ece99b542b34f324ffa3be1b621\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"3424a9b8564681e2a3c8aea4b1c1f625d40e47fc2dd81a12b44152b2cd6a8511\""
May 13 02:23:20.508645 containerd[1480]: time="2025-05-13T02:23:20.508091752Z" level=info msg="StartContainer for \"3424a9b8564681e2a3c8aea4b1c1f625d40e47fc2dd81a12b44152b2cd6a8511\""
May 13 02:23:20.511808 containerd[1480]: time="2025-05-13T02:23:20.511729816Z" level=info msg="connecting to shim 3424a9b8564681e2a3c8aea4b1c1f625d40e47fc2dd81a12b44152b2cd6a8511" address="unix:///run/containerd/s/2ba3a2e2e0fb161ee103d21157a14b37b249f984fd5f72299e36ac013d72eb0f" protocol=ttrpc version=3
May 13 02:23:20.545595 systemd[1]: Started cri-containerd-3424a9b8564681e2a3c8aea4b1c1f625d40e47fc2dd81a12b44152b2cd6a8511.scope - libcontainer container 3424a9b8564681e2a3c8aea4b1c1f625d40e47fc2dd81a12b44152b2cd6a8511.
May 13 02:23:20.595958 containerd[1480]: time="2025-05-13T02:23:20.595919185Z" level=info msg="StartContainer for \"3424a9b8564681e2a3c8aea4b1c1f625d40e47fc2dd81a12b44152b2cd6a8511\" returns successfully"
May 13 02:23:20.672307 containerd[1480]: time="2025-05-13T02:23:20.672072790Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3424a9b8564681e2a3c8aea4b1c1f625d40e47fc2dd81a12b44152b2cd6a8511\" id:\"ffa6fdd6bb39e9272f9ae8bf0acb52e6564ac71d0249ad7d5514ca2306d21b50\" pid:3461 exited_at:{seconds:1747103000 nanos:671758940}"
May 13 02:23:20.729842 kubelet[2813]: I0513 02:23:20.729373 2813 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
May 13 02:23:20.765388 kubelet[2813]: I0513 02:23:20.765326 2813 topology_manager.go:215] "Topology Admit Handler" podUID="d932ba71-8ef5-47c9-a64b-b05384299a7f" podNamespace="kube-system" podName="coredns-7db6d8ff4d-t8mw4"
May 13 02:23:20.776401 systemd[1]: Created slice kubepods-burstable-podd932ba71_8ef5_47c9_a64b_b05384299a7f.slice - libcontainer container kubepods-burstable-podd932ba71_8ef5_47c9_a64b_b05384299a7f.slice.
May 13 02:23:20.778051 kubelet[2813]: I0513 02:23:20.778017 2813 topology_manager.go:215] "Topology Admit Handler" podUID="7c2a48ed-e2c7-43ec-b1a8-8b90e9684e78" podNamespace="kube-system" podName="coredns-7db6d8ff4d-bv6z2"
May 13 02:23:20.790817 systemd[1]: Created slice kubepods-burstable-pod7c2a48ed_e2c7_43ec_b1a8_8b90e9684e78.slice - libcontainer container kubepods-burstable-pod7c2a48ed_e2c7_43ec_b1a8_8b90e9684e78.slice.
May 13 02:23:20.865022 kubelet[2813]: I0513 02:23:20.864984 2813 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lpphn\" (UniqueName: \"kubernetes.io/projected/d932ba71-8ef5-47c9-a64b-b05384299a7f-kube-api-access-lpphn\") pod \"coredns-7db6d8ff4d-t8mw4\" (UID: \"d932ba71-8ef5-47c9-a64b-b05384299a7f\") " pod="kube-system/coredns-7db6d8ff4d-t8mw4"
May 13 02:23:20.865269 kubelet[2813]: I0513 02:23:20.865197 2813 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d932ba71-8ef5-47c9-a64b-b05384299a7f-config-volume\") pod \"coredns-7db6d8ff4d-t8mw4\" (UID: \"d932ba71-8ef5-47c9-a64b-b05384299a7f\") " pod="kube-system/coredns-7db6d8ff4d-t8mw4"
May 13 02:23:20.966161 kubelet[2813]: I0513 02:23:20.966035 2813 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7c2a48ed-e2c7-43ec-b1a8-8b90e9684e78-config-volume\") pod \"coredns-7db6d8ff4d-bv6z2\" (UID: \"7c2a48ed-e2c7-43ec-b1a8-8b90e9684e78\") " pod="kube-system/coredns-7db6d8ff4d-bv6z2"
May 13 02:23:20.966161 kubelet[2813]: I0513 02:23:20.966080 2813 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ww646\" (UniqueName: \"kubernetes.io/projected/7c2a48ed-e2c7-43ec-b1a8-8b90e9684e78-kube-api-access-ww646\") pod \"coredns-7db6d8ff4d-bv6z2\" (UID: \"7c2a48ed-e2c7-43ec-b1a8-8b90e9684e78\") " pod="kube-system/coredns-7db6d8ff4d-bv6z2"
May 13 02:23:21.083589 containerd[1480]: time="2025-05-13T02:23:21.082791238Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-t8mw4,Uid:d932ba71-8ef5-47c9-a64b-b05384299a7f,Namespace:kube-system,Attempt:0,}"
May 13 02:23:21.096635 containerd[1480]: time="2025-05-13T02:23:21.096593774Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-bv6z2,Uid:7c2a48ed-e2c7-43ec-b1a8-8b90e9684e78,Namespace:kube-system,Attempt:0,}"
May 13 02:23:21.487129 kubelet[2813]: I0513 02:23:21.486877 2813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-6jj4n" podStartSLOduration=8.10696356 podStartE2EDuration="15.486843814s" podCreationTimestamp="2025-05-13 02:23:06 +0000 UTC" firstStartedPulling="2025-05-13 02:23:07.812158282 +0000 UTC m=+15.619275906" lastFinishedPulling="2025-05-13 02:23:15.192038536 +0000 UTC m=+22.999156160" observedRunningTime="2025-05-13 02:23:21.484591393 +0000 UTC m=+29.291709067" watchObservedRunningTime="2025-05-13 02:23:21.486843814 +0000 UTC m=+29.293961539"
May 13 02:23:22.798764 systemd-networkd[1385]: cilium_host: Link UP
May 13 02:23:22.799130 systemd-networkd[1385]: cilium_net: Link UP
May 13 02:23:22.804631 systemd-networkd[1385]: cilium_net: Gained carrier
May 13 02:23:22.805956 systemd-networkd[1385]: cilium_host: Gained carrier
May 13 02:23:22.806267 systemd-networkd[1385]: cilium_net: Gained IPv6LL
May 13 02:23:22.806697 systemd-networkd[1385]: cilium_host: Gained IPv6LL
May 13 02:23:22.916972 systemd-networkd[1385]: cilium_vxlan: Link UP
May 13 02:23:22.917268 systemd-networkd[1385]: cilium_vxlan: Gained carrier
May 13 02:23:23.221638 kernel: NET: Registered PF_ALG protocol family
May 13 02:23:23.888643 systemd-networkd[1385]: lxc_health: Link UP
May 13 02:23:23.895441 systemd-networkd[1385]: lxc_health: Gained carrier
May 13 02:23:24.122292 systemd-networkd[1385]: lxcdf62c702aa3b: Link UP
May 13 02:23:24.124553 kernel: eth0: renamed from tmpb7bee
May 13 02:23:24.129181 systemd-networkd[1385]: lxcdf62c702aa3b: Gained carrier
May 13 02:23:24.156537 kernel: eth0: renamed from tmp2f234
May 13 02:23:24.154355 systemd-networkd[1385]: lxc19a9123da2a7: Link UP
May 13 02:23:24.161973 systemd-networkd[1385]: lxc19a9123da2a7: Gained carrier
May 13 02:23:24.361658 systemd-networkd[1385]: cilium_vxlan: Gained IPv6LL
May 13 02:23:25.769590 systemd-networkd[1385]: lxc_health: Gained IPv6LL
May 13 02:23:25.897779 systemd-networkd[1385]: lxcdf62c702aa3b: Gained IPv6LL
May 13 02:23:26.153785 systemd-networkd[1385]: lxc19a9123da2a7: Gained IPv6LL
May 13 02:23:28.637058 containerd[1480]: time="2025-05-13T02:23:28.637009790Z" level=info msg="connecting to shim 2f2346d8860fbc791b0d6578370af52436aee5aeaade5b0f8fee9fd3aca43460" address="unix:///run/containerd/s/25d54a597ed3b5372406f5f9c63a9a86f31b0ae4fa5c63a5b645d6e2e49a2ab9" namespace=k8s.io protocol=ttrpc version=3
May 13 02:23:28.685658 systemd[1]: Started cri-containerd-2f2346d8860fbc791b0d6578370af52436aee5aeaade5b0f8fee9fd3aca43460.scope - libcontainer container 2f2346d8860fbc791b0d6578370af52436aee5aeaade5b0f8fee9fd3aca43460.
May 13 02:23:28.737766 containerd[1480]: time="2025-05-13T02:23:28.737726223Z" level=info msg="connecting to shim b7bee0492a8c2b749359ce0d49c04567aa99c4cf42336b99e48d1c5f6d5206b1" address="unix:///run/containerd/s/e3f84c04afb3f8dc1b3d5b3a2bbc7a75120240a4f1a114efe3e18b3c537b0ba4" namespace=k8s.io protocol=ttrpc version=3
May 13 02:23:28.773663 systemd[1]: Started cri-containerd-b7bee0492a8c2b749359ce0d49c04567aa99c4cf42336b99e48d1c5f6d5206b1.scope - libcontainer container b7bee0492a8c2b749359ce0d49c04567aa99c4cf42336b99e48d1c5f6d5206b1.
May 13 02:23:28.779281 containerd[1480]: time="2025-05-13T02:23:28.779239105Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-bv6z2,Uid:7c2a48ed-e2c7-43ec-b1a8-8b90e9684e78,Namespace:kube-system,Attempt:0,} returns sandbox id \"2f2346d8860fbc791b0d6578370af52436aee5aeaade5b0f8fee9fd3aca43460\""
May 13 02:23:28.786697 containerd[1480]: time="2025-05-13T02:23:28.786623718Z" level=info msg="CreateContainer within sandbox \"2f2346d8860fbc791b0d6578370af52436aee5aeaade5b0f8fee9fd3aca43460\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
May 13 02:23:28.810184 containerd[1480]: time="2025-05-13T02:23:28.807613192Z" level=info msg="Container b908df277e01a0923bce0317acc894197621ee91eaf9f6a0a308a1b35fc44947: CDI devices from CRI Config.CDIDevices: []"
May 13 02:23:28.824884 containerd[1480]: time="2025-05-13T02:23:28.824837546Z" level=info msg="CreateContainer within sandbox \"2f2346d8860fbc791b0d6578370af52436aee5aeaade5b0f8fee9fd3aca43460\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b908df277e01a0923bce0317acc894197621ee91eaf9f6a0a308a1b35fc44947\""
May 13 02:23:28.826723 containerd[1480]: time="2025-05-13T02:23:28.826690034Z" level=info msg="StartContainer for \"b908df277e01a0923bce0317acc894197621ee91eaf9f6a0a308a1b35fc44947\""
May 13 02:23:28.830473 containerd[1480]: time="2025-05-13T02:23:28.829773873Z" level=info msg="connecting to shim b908df277e01a0923bce0317acc894197621ee91eaf9f6a0a308a1b35fc44947" address="unix:///run/containerd/s/25d54a597ed3b5372406f5f9c63a9a86f31b0ae4fa5c63a5b645d6e2e49a2ab9" protocol=ttrpc version=3
May 13 02:23:28.863614 systemd[1]: Started cri-containerd-b908df277e01a0923bce0317acc894197621ee91eaf9f6a0a308a1b35fc44947.scope - libcontainer container b908df277e01a0923bce0317acc894197621ee91eaf9f6a0a308a1b35fc44947.
May 13 02:23:28.888599 containerd[1480]: time="2025-05-13T02:23:28.888392808Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-t8mw4,Uid:d932ba71-8ef5-47c9-a64b-b05384299a7f,Namespace:kube-system,Attempt:0,} returns sandbox id \"b7bee0492a8c2b749359ce0d49c04567aa99c4cf42336b99e48d1c5f6d5206b1\""
May 13 02:23:28.895804 containerd[1480]: time="2025-05-13T02:23:28.895748377Z" level=info msg="CreateContainer within sandbox \"b7bee0492a8c2b749359ce0d49c04567aa99c4cf42336b99e48d1c5f6d5206b1\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
May 13 02:23:28.913433 containerd[1480]: time="2025-05-13T02:23:28.912951301Z" level=info msg="Container 999ac4e5285e0a2b9b3e9cf48cc98c1bf5523cd2e3a8303392c42d63b32a6945: CDI devices from CRI Config.CDIDevices: []"
May 13 02:23:28.914893 containerd[1480]: time="2025-05-13T02:23:28.914841640Z" level=info msg="StartContainer for \"b908df277e01a0923bce0317acc894197621ee91eaf9f6a0a308a1b35fc44947\" returns successfully"
May 13 02:23:28.924536 containerd[1480]: time="2025-05-13T02:23:28.924494191Z" level=info msg="CreateContainer within sandbox \"b7bee0492a8c2b749359ce0d49c04567aa99c4cf42336b99e48d1c5f6d5206b1\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"999ac4e5285e0a2b9b3e9cf48cc98c1bf5523cd2e3a8303392c42d63b32a6945\""
May 13 02:23:28.925264 containerd[1480]: time="2025-05-13T02:23:28.925151514Z" level=info msg="StartContainer for \"999ac4e5285e0a2b9b3e9cf48cc98c1bf5523cd2e3a8303392c42d63b32a6945\""
May 13 02:23:28.926486 containerd[1480]: time="2025-05-13T02:23:28.926117488Z" level=info msg="connecting to shim 999ac4e5285e0a2b9b3e9cf48cc98c1bf5523cd2e3a8303392c42d63b32a6945" address="unix:///run/containerd/s/e3f84c04afb3f8dc1b3d5b3a2bbc7a75120240a4f1a114efe3e18b3c537b0ba4" protocol=ttrpc version=3
May 13 02:23:28.954825 systemd[1]: Started cri-containerd-999ac4e5285e0a2b9b3e9cf48cc98c1bf5523cd2e3a8303392c42d63b32a6945.scope - libcontainer container 999ac4e5285e0a2b9b3e9cf48cc98c1bf5523cd2e3a8303392c42d63b32a6945.
May 13 02:23:28.996153 containerd[1480]: time="2025-05-13T02:23:28.996118921Z" level=info msg="StartContainer for \"999ac4e5285e0a2b9b3e9cf48cc98c1bf5523cd2e3a8303392c42d63b32a6945\" returns successfully"
May 13 02:23:29.501600 kubelet[2813]: I0513 02:23:29.501446 2813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-t8mw4" podStartSLOduration=23.501415953 podStartE2EDuration="23.501415953s" podCreationTimestamp="2025-05-13 02:23:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 02:23:29.49799903 +0000 UTC m=+37.305116704" watchObservedRunningTime="2025-05-13 02:23:29.501415953 +0000 UTC m=+37.308533628"
May 13 02:23:29.528383 kubelet[2813]: I0513 02:23:29.528278 2813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-bv6z2" podStartSLOduration=23.528250637 podStartE2EDuration="23.528250637s" podCreationTimestamp="2025-05-13 02:23:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 02:23:29.526031462 +0000 UTC m=+37.333149136" watchObservedRunningTime="2025-05-13 02:23:29.528250637 +0000 UTC m=+37.335368311"
May 13 02:27:19.754199 systemd[1]: Started sshd@9-172.24.4.210:22-172.24.4.1:45050.service - OpenSSH per-connection server daemon (172.24.4.1:45050).
May 13 02:27:21.066180 sshd[4145]: Accepted publickey for core from 172.24.4.1 port 45050 ssh2: RSA SHA256:gdNeVpR6HpeSX8AXQuuSvHfNPzQEJIBibBwriZyKT5A
May 13 02:27:21.072686 sshd-session[4145]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 02:27:21.102672 systemd-logind[1455]: New session 12 of user core.
May 13 02:27:21.115320 systemd[1]: Started session-12.scope - Session 12 of User core.
May 13 02:27:21.962126 sshd[4147]: Connection closed by 172.24.4.1 port 45050
May 13 02:27:21.964393 sshd-session[4145]: pam_unix(sshd:session): session closed for user core
May 13 02:27:21.976579 systemd[1]: sshd@9-172.24.4.210:22-172.24.4.1:45050.service: Deactivated successfully.
May 13 02:27:21.985704 systemd[1]: session-12.scope: Deactivated successfully.
May 13 02:27:21.988514 systemd-logind[1455]: Session 12 logged out. Waiting for processes to exit.
May 13 02:27:21.991944 systemd-logind[1455]: Removed session 12.
May 13 02:27:26.990781 systemd[1]: Started sshd@10-172.24.4.210:22-172.24.4.1:44322.service - OpenSSH per-connection server daemon (172.24.4.1:44322).
May 13 02:27:28.267526 sshd[4164]: Accepted publickey for core from 172.24.4.1 port 44322 ssh2: RSA SHA256:gdNeVpR6HpeSX8AXQuuSvHfNPzQEJIBibBwriZyKT5A
May 13 02:27:28.269089 sshd-session[4164]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 02:27:28.280212 systemd-logind[1455]: New session 13 of user core.
May 13 02:27:28.286768 systemd[1]: Started session-13.scope - Session 13 of User core.
May 13 02:27:28.888061 sshd[4166]: Connection closed by 172.24.4.1 port 44322
May 13 02:27:28.889023 sshd-session[4164]: pam_unix(sshd:session): session closed for user core
May 13 02:27:28.895710 systemd[1]: sshd@10-172.24.4.210:22-172.24.4.1:44322.service: Deactivated successfully.
May 13 02:27:28.900264 systemd[1]: session-13.scope: Deactivated successfully.
May 13 02:27:28.904402 systemd-logind[1455]: Session 13 logged out. Waiting for processes to exit.
May 13 02:27:28.906771 systemd-logind[1455]: Removed session 13.
May 13 02:27:33.913083 systemd[1]: Started sshd@11-172.24.4.210:22-172.24.4.1:39210.service - OpenSSH per-connection server daemon (172.24.4.1:39210).
May 13 02:27:35.248935 sshd[4179]: Accepted publickey for core from 172.24.4.1 port 39210 ssh2: RSA SHA256:gdNeVpR6HpeSX8AXQuuSvHfNPzQEJIBibBwriZyKT5A
May 13 02:27:35.252011 sshd-session[4179]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 02:27:35.265637 systemd-logind[1455]: New session 14 of user core.
May 13 02:27:35.275766 systemd[1]: Started session-14.scope - Session 14 of User core.
May 13 02:27:36.017961 sshd[4182]: Connection closed by 172.24.4.1 port 39210
May 13 02:27:36.018813 sshd-session[4179]: pam_unix(sshd:session): session closed for user core
May 13 02:27:36.025937 systemd[1]: sshd@11-172.24.4.210:22-172.24.4.1:39210.service: Deactivated successfully.
May 13 02:27:36.029673 systemd[1]: session-14.scope: Deactivated successfully.
May 13 02:27:36.031224 systemd-logind[1455]: Session 14 logged out. Waiting for processes to exit.
May 13 02:27:36.032896 systemd-logind[1455]: Removed session 14.
May 13 02:27:41.055759 systemd[1]: Started sshd@12-172.24.4.210:22-172.24.4.1:39216.service - OpenSSH per-connection server daemon (172.24.4.1:39216).
May 13 02:27:42.285039 sshd[4196]: Accepted publickey for core from 172.24.4.1 port 39216 ssh2: RSA SHA256:gdNeVpR6HpeSX8AXQuuSvHfNPzQEJIBibBwriZyKT5A
May 13 02:27:42.288840 sshd-session[4196]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 02:27:42.303586 systemd-logind[1455]: New session 15 of user core.
May 13 02:27:42.316006 systemd[1]: Started session-15.scope - Session 15 of User core.
May 13 02:27:43.046517 sshd[4198]: Connection closed by 172.24.4.1 port 39216
May 13 02:27:43.048015 sshd-session[4196]: pam_unix(sshd:session): session closed for user core
May 13 02:27:43.065252 systemd[1]: sshd@12-172.24.4.210:22-172.24.4.1:39216.service: Deactivated successfully.
May 13 02:27:43.071603 systemd[1]: session-15.scope: Deactivated successfully.
May 13 02:27:43.074616 systemd-logind[1455]: Session 15 logged out. Waiting for processes to exit.
May 13 02:27:43.081736 systemd[1]: Started sshd@13-172.24.4.210:22-172.24.4.1:39226.service - OpenSSH per-connection server daemon (172.24.4.1:39226).
May 13 02:27:43.086293 systemd-logind[1455]: Removed session 15.
May 13 02:27:44.357008 sshd[4210]: Accepted publickey for core from 172.24.4.1 port 39226 ssh2: RSA SHA256:gdNeVpR6HpeSX8AXQuuSvHfNPzQEJIBibBwriZyKT5A
May 13 02:27:44.361794 sshd-session[4210]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 02:27:44.379588 systemd-logind[1455]: New session 16 of user core.
May 13 02:27:44.390946 systemd[1]: Started session-16.scope - Session 16 of User core.
May 13 02:27:45.186261 sshd[4213]: Connection closed by 172.24.4.1 port 39226
May 13 02:27:45.191136 sshd-session[4210]: pam_unix(sshd:session): session closed for user core
May 13 02:27:45.205190 systemd[1]: sshd@13-172.24.4.210:22-172.24.4.1:39226.service: Deactivated successfully.
May 13 02:27:45.208998 systemd[1]: session-16.scope: Deactivated successfully.
May 13 02:27:45.210414 systemd-logind[1455]: Session 16 logged out. Waiting for processes to exit.
May 13 02:27:45.213869 systemd[1]: Started sshd@14-172.24.4.210:22-172.24.4.1:37840.service - OpenSSH per-connection server daemon (172.24.4.1:37840).
May 13 02:27:45.215791 systemd-logind[1455]: Removed session 16.
May 13 02:27:46.394983 sshd[4221]: Accepted publickey for core from 172.24.4.1 port 37840 ssh2: RSA SHA256:gdNeVpR6HpeSX8AXQuuSvHfNPzQEJIBibBwriZyKT5A
May 13 02:27:46.399285 sshd-session[4221]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 02:27:46.413673 systemd-logind[1455]: New session 17 of user core.
May 13 02:27:46.421823 systemd[1]: Started session-17.scope - Session 17 of User core.
May 13 02:27:47.269695 sshd[4224]: Connection closed by 172.24.4.1 port 37840
May 13 02:27:47.271191 sshd-session[4221]: pam_unix(sshd:session): session closed for user core
May 13 02:27:47.280300 systemd[1]: sshd@14-172.24.4.210:22-172.24.4.1:37840.service: Deactivated successfully.
May 13 02:27:47.288003 systemd[1]: session-17.scope: Deactivated successfully.
May 13 02:27:47.291221 systemd-logind[1455]: Session 17 logged out. Waiting for processes to exit.
May 13 02:27:47.294392 systemd-logind[1455]: Removed session 17.
May 13 02:27:47.300715 containerd[1480]: time="2025-05-13T02:27:47.299945478Z" level=warning msg="container event discarded" container=ed9e4b23d0015bf51ed9c5d8d1b51511bbf84c3ba200e5c140a39c80b011ae4c type=CONTAINER_CREATED_EVENT
May 13 02:27:47.313045 containerd[1480]: time="2025-05-13T02:27:47.312856055Z" level=warning msg="container event discarded" container=ed9e4b23d0015bf51ed9c5d8d1b51511bbf84c3ba200e5c140a39c80b011ae4c type=CONTAINER_STARTED_EVENT
May 13 02:27:47.313045 containerd[1480]: time="2025-05-13T02:27:47.312954920Z" level=warning msg="container event discarded" container=7c6a2e09ab26fb16ad6435ed5cd39807e4fa50092bf43476c239d39e1f6f156d type=CONTAINER_CREATED_EVENT
May 13 02:27:47.313045 containerd[1480]: time="2025-05-13T02:27:47.312979738Z" level=warning msg="container event discarded" container=7c6a2e09ab26fb16ad6435ed5cd39807e4fa50092bf43476c239d39e1f6f156d type=CONTAINER_STARTED_EVENT
May 13 02:27:47.335424 containerd[1480]: time="2025-05-13T02:27:47.335324883Z" level=warning msg="container event discarded" container=eb55ac0c8c750d16b2332db45cd6b9188731fff546db2ca13ee6228b020b2227 type=CONTAINER_CREATED_EVENT
May 13 02:27:47.335424 containerd[1480]: time="2025-05-13T02:27:47.335397740Z" level=warning msg="container event discarded" container=eb55ac0c8c750d16b2332db45cd6b9188731fff546db2ca13ee6228b020b2227 type=CONTAINER_STARTED_EVENT
May 13 02:27:47.355615 containerd[1480]: time="2025-05-13T02:27:47.355352016Z" level=warning msg="container event discarded" container=ed0d45d5b5bee78fe864dd1f0276706e9d6713c0d13ad2f2a12c239336933c1c type=CONTAINER_CREATED_EVENT
May 13 02:27:47.355615 containerd[1480]: time="2025-05-13T02:27:47.355548085Z" level=warning msg="container event discarded" container=b27317a2ecd87c314a5c7267346c73e3c18dc588107e43854f37a4edf24cb743 type=CONTAINER_CREATED_EVENT
May 13 02:27:47.383058 containerd[1480]: time="2025-05-13T02:27:47.382926083Z" level=warning msg="container event discarded" container=758db6908d3e363af7d390cf92487dfeacb0dcc0021d5c9705d5c08aa2238c10 type=CONTAINER_CREATED_EVENT
May 13 02:27:47.476767 containerd[1480]: time="2025-05-13T02:27:47.476557881Z" level=warning msg="container event discarded" container=ed0d45d5b5bee78fe864dd1f0276706e9d6713c0d13ad2f2a12c239336933c1c type=CONTAINER_STARTED_EVENT
May 13 02:27:47.501280 containerd[1480]: time="2025-05-13T02:27:47.501049408Z" level=warning msg="container event discarded" container=b27317a2ecd87c314a5c7267346c73e3c18dc588107e43854f37a4edf24cb743 type=CONTAINER_STARTED_EVENT
May 13 02:27:47.501280 containerd[1480]: time="2025-05-13T02:27:47.501133646Z" level=warning msg="container event discarded" container=758db6908d3e363af7d390cf92487dfeacb0dcc0021d5c9705d5c08aa2238c10 type=CONTAINER_STARTED_EVENT
May 13 02:27:52.299082 systemd[1]: Started sshd@15-172.24.4.210:22-172.24.4.1:37856.service - OpenSSH per-connection server daemon (172.24.4.1:37856).
May 13 02:27:53.398307 sshd[4237]: Accepted publickey for core from 172.24.4.1 port 37856 ssh2: RSA SHA256:gdNeVpR6HpeSX8AXQuuSvHfNPzQEJIBibBwriZyKT5A
May 13 02:27:53.401841 sshd-session[4237]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 02:27:53.415978 systemd-logind[1455]: New session 18 of user core.
May 13 02:27:53.425857 systemd[1]: Started session-18.scope - Session 18 of User core.
May 13 02:27:54.215237 sshd[4239]: Connection closed by 172.24.4.1 port 37856
May 13 02:27:54.217647 sshd-session[4237]: pam_unix(sshd:session): session closed for user core
May 13 02:27:54.236970 systemd[1]: sshd@15-172.24.4.210:22-172.24.4.1:37856.service: Deactivated successfully.
May 13 02:27:54.243259 systemd[1]: session-18.scope: Deactivated successfully.
May 13 02:27:54.246504 systemd-logind[1455]: Session 18 logged out. Waiting for processes to exit.
May 13 02:27:54.251644 systemd-logind[1455]: Removed session 18.
May 13 02:27:54.255733 systemd[1]: Started sshd@16-172.24.4.210:22-172.24.4.1:55034.service - OpenSSH per-connection server daemon (172.24.4.1:55034).
May 13 02:27:55.390864 sshd[4249]: Accepted publickey for core from 172.24.4.1 port 55034 ssh2: RSA SHA256:gdNeVpR6HpeSX8AXQuuSvHfNPzQEJIBibBwriZyKT5A
May 13 02:27:55.394408 sshd-session[4249]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 02:27:55.406252 systemd-logind[1455]: New session 19 of user core.
May 13 02:27:55.414840 systemd[1]: Started session-19.scope - Session 19 of User core.
May 13 02:27:56.255718 sshd[4252]: Connection closed by 172.24.4.1 port 55034
May 13 02:27:56.257396 sshd-session[4249]: pam_unix(sshd:session): session closed for user core
May 13 02:27:56.276337 systemd[1]: sshd@16-172.24.4.210:22-172.24.4.1:55034.service: Deactivated successfully.
May 13 02:27:56.284918 systemd[1]: session-19.scope: Deactivated successfully.
May 13 02:27:56.290904 systemd-logind[1455]: Session 19 logged out. Waiting for processes to exit.
May 13 02:27:56.296363 systemd[1]: Started sshd@17-172.24.4.210:22-172.24.4.1:55040.service - OpenSSH per-connection server daemon (172.24.4.1:55040).
May 13 02:27:56.301214 systemd-logind[1455]: Removed session 19.
May 13 02:27:57.465738 sshd[4261]: Accepted publickey for core from 172.24.4.1 port 55040 ssh2: RSA SHA256:gdNeVpR6HpeSX8AXQuuSvHfNPzQEJIBibBwriZyKT5A
May 13 02:27:57.470810 sshd-session[4261]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 02:27:57.485939 systemd-logind[1455]: New session 20 of user core.
May 13 02:27:57.497098 systemd[1]: Started session-20.scope - Session 20 of User core.
May 13 02:28:00.633511 sshd[4264]: Connection closed by 172.24.4.1 port 55040
May 13 02:28:00.635866 sshd-session[4261]: pam_unix(sshd:session): session closed for user core
May 13 02:28:00.660794 systemd[1]: sshd@17-172.24.4.210:22-172.24.4.1:55040.service: Deactivated successfully.
May 13 02:28:00.667336 systemd[1]: session-20.scope: Deactivated successfully.
May 13 02:28:00.671002 systemd-logind[1455]: Session 20 logged out. Waiting for processes to exit.
May 13 02:28:00.675244 systemd[1]: Started sshd@18-172.24.4.210:22-172.24.4.1:55046.service - OpenSSH per-connection server daemon (172.24.4.1:55046).
May 13 02:28:00.677543 systemd-logind[1455]: Removed session 20.
May 13 02:28:01.959604 sshd[4280]: Accepted publickey for core from 172.24.4.1 port 55046 ssh2: RSA SHA256:gdNeVpR6HpeSX8AXQuuSvHfNPzQEJIBibBwriZyKT5A
May 13 02:28:01.964254 sshd-session[4280]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 02:28:01.980932 systemd-logind[1455]: New session 21 of user core.
May 13 02:28:01.990007 systemd[1]: Started session-21.scope - Session 21 of User core.
May 13 02:28:03.108829 sshd[4283]: Connection closed by 172.24.4.1 port 55046
May 13 02:28:03.107885 sshd-session[4280]: pam_unix(sshd:session): session closed for user core
May 13 02:28:03.137013 systemd[1]: sshd@18-172.24.4.210:22-172.24.4.1:55046.service: Deactivated successfully.
May 13 02:28:03.144793 systemd[1]: session-21.scope: Deactivated successfully.
May 13 02:28:03.149637 systemd-logind[1455]: Session 21 logged out. Waiting for processes to exit.
May 13 02:28:03.155911 systemd[1]: Started sshd@19-172.24.4.210:22-172.24.4.1:55062.service - OpenSSH per-connection server daemon (172.24.4.1:55062).
May 13 02:28:03.160268 systemd-logind[1455]: Removed session 21.
May 13 02:28:04.324326 sshd[4292]: Accepted publickey for core from 172.24.4.1 port 55062 ssh2: RSA SHA256:gdNeVpR6HpeSX8AXQuuSvHfNPzQEJIBibBwriZyKT5A
May 13 02:28:04.326597 sshd-session[4292]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 02:28:04.339294 systemd-logind[1455]: New session 22 of user core.
May 13 02:28:04.349775 systemd[1]: Started session-22.scope - Session 22 of User core.
May 13 02:28:05.056887 sshd[4295]: Connection closed by 172.24.4.1 port 55062
May 13 02:28:05.058293 sshd-session[4292]: pam_unix(sshd:session): session closed for user core
May 13 02:28:05.065412 systemd[1]: sshd@19-172.24.4.210:22-172.24.4.1:55062.service: Deactivated successfully.
May 13 02:28:05.072222 systemd[1]: session-22.scope: Deactivated successfully.
May 13 02:28:05.083417 systemd-logind[1455]: Session 22 logged out. Waiting for processes to exit.
May 13 02:28:05.086550 systemd-logind[1455]: Removed session 22.
May 13 02:28:07.822914 containerd[1480]: time="2025-05-13T02:28:07.821649180Z" level=warning msg="container event discarded" container=f8c9ab615451e1f44ee30b374ce014a3b78f2ece99b542b34f324ffa3be1b621 type=CONTAINER_CREATED_EVENT
May 13 02:28:07.822914 containerd[1480]: time="2025-05-13T02:28:07.822792086Z" level=warning msg="container event discarded" container=f8c9ab615451e1f44ee30b374ce014a3b78f2ece99b542b34f324ffa3be1b621 type=CONTAINER_STARTED_EVENT
May 13 02:28:07.927099 containerd[1480]: time="2025-05-13T02:28:07.926992978Z" level=warning msg="container event discarded" container=d947fdf12e23fcaf9d5360f33e3eb2d6d2bffabf24b60a1af9106e237e6cf367 type=CONTAINER_CREATED_EVENT
May 13 02:28:07.927099 containerd[1480]: time="2025-05-13T02:28:07.927082836Z" level=warning msg="container event discarded" container=d947fdf12e23fcaf9d5360f33e3eb2d6d2bffabf24b60a1af9106e237e6cf367 type=CONTAINER_STARTED_EVENT
May 13 02:28:08.412831 containerd[1480]: time="2025-05-13T02:28:08.412658891Z" level=warning msg="container event discarded" container=1971934326f606970075c583184b4c60cabfab5b67f17d592701eeb957484916 type=CONTAINER_CREATED_EVENT
May 13 02:28:08.412831 containerd[1480]: time="2025-05-13T02:28:08.412749921Z" level=warning msg="container event discarded" container=1971934326f606970075c583184b4c60cabfab5b67f17d592701eeb957484916 type=CONTAINER_STARTED_EVENT
May 13 02:28:08.443083 containerd[1480]: time="2025-05-13T02:28:08.442994368Z" level=warning msg="container event discarded" container=610ffd82e987e834b79cfd01c647dc12a94d371fb0825ec04e4e275704029832 type=CONTAINER_CREATED_EVENT
May 13 02:28:08.511528 containerd[1480]: time="2025-05-13T02:28:08.511311803Z" level=warning msg="container event discarded" container=610ffd82e987e834b79cfd01c647dc12a94d371fb0825ec04e4e275704029832 type=CONTAINER_STARTED_EVENT
May 13 02:28:10.090105 systemd[1]: Started sshd@20-172.24.4.210:22-172.24.4.1:49774.service - OpenSSH per-connection server daemon (172.24.4.1:49774).
May 13 02:28:11.237186 sshd[4312]: Accepted publickey for core from 172.24.4.1 port 49774 ssh2: RSA SHA256:gdNeVpR6HpeSX8AXQuuSvHfNPzQEJIBibBwriZyKT5A
May 13 02:28:11.240227 sshd-session[4312]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 02:28:11.249590 systemd-logind[1455]: New session 23 of user core.
May 13 02:28:11.261716 systemd[1]: Started session-23.scope - Session 23 of User core.
May 13 02:28:12.107989 sshd[4314]: Connection closed by 172.24.4.1 port 49774
May 13 02:28:12.109855 sshd-session[4312]: pam_unix(sshd:session): session closed for user core
May 13 02:28:12.121159 systemd[1]: sshd@20-172.24.4.210:22-172.24.4.1:49774.service: Deactivated successfully.
May 13 02:28:12.130682 systemd[1]: session-23.scope: Deactivated successfully.
May 13 02:28:12.134217 systemd-logind[1455]: Session 23 logged out. Waiting for processes to exit.
May 13 02:28:12.138905 systemd-logind[1455]: Removed session 23.
May 13 02:28:15.232795 containerd[1480]: time="2025-05-13T02:28:15.232624780Z" level=warning msg="container event discarded" container=9ca926ce60be1251ecd7104ddfae580b80f177ec35f606b4086176cd52581d5c type=CONTAINER_CREATED_EVENT
May 13 02:28:15.303047 containerd[1480]: time="2025-05-13T02:28:15.302703990Z" level=warning msg="container event discarded" container=9ca926ce60be1251ecd7104ddfae580b80f177ec35f606b4086176cd52581d5c type=CONTAINER_STARTED_EVENT
May 13 02:28:16.582006 containerd[1480]: time="2025-05-13T02:28:16.581798691Z" level=warning msg="container event discarded" container=9ca926ce60be1251ecd7104ddfae580b80f177ec35f606b4086176cd52581d5c type=CONTAINER_STOPPED_EVENT
May 13 02:28:17.133546 systemd[1]: Started sshd@21-172.24.4.210:22-172.24.4.1:34788.service - OpenSSH per-connection server daemon (172.24.4.1:34788).
May 13 02:28:17.462385 containerd[1480]: time="2025-05-13T02:28:17.461810749Z" level=warning msg="container event discarded" container=878f636ba2a4dccdcb38fc598418f67caf2064e7cfb13c3914491b6b1436e0b5 type=CONTAINER_CREATED_EVENT
May 13 02:28:17.555706 containerd[1480]: time="2025-05-13T02:28:17.555566821Z" level=warning msg="container event discarded" container=878f636ba2a4dccdcb38fc598418f67caf2064e7cfb13c3914491b6b1436e0b5 type=CONTAINER_STARTED_EVENT
May 13 02:28:17.625144 containerd[1480]: time="2025-05-13T02:28:17.624983417Z" level=warning msg="container event discarded" container=878f636ba2a4dccdcb38fc598418f67caf2064e7cfb13c3914491b6b1436e0b5 type=CONTAINER_STOPPED_EVENT
May 13 02:28:18.332502 sshd[4325]: Accepted publickey for core from 172.24.4.1 port 34788 ssh2: RSA SHA256:gdNeVpR6HpeSX8AXQuuSvHfNPzQEJIBibBwriZyKT5A
May 13 02:28:18.335994 sshd-session[4325]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 02:28:18.347968 systemd-logind[1455]: New session 24 of user core.
May 13 02:28:18.359545 systemd[1]: Started session-24.scope - Session 24 of User core.
May 13 02:28:18.469175 containerd[1480]: time="2025-05-13T02:28:18.469019702Z" level=warning msg="container event discarded" container=1f4a482ada3525efe3ad35de3604d357922799e89a573ec7b64a8b912b96fae7 type=CONTAINER_CREATED_EVENT
May 13 02:28:18.594305 containerd[1480]: time="2025-05-13T02:28:18.593839926Z" level=warning msg="container event discarded" container=1f4a482ada3525efe3ad35de3604d357922799e89a573ec7b64a8b912b96fae7 type=CONTAINER_STARTED_EVENT
May 13 02:28:18.846921 containerd[1480]: time="2025-05-13T02:28:18.846606817Z" level=warning msg="container event discarded" container=1f4a482ada3525efe3ad35de3604d357922799e89a573ec7b64a8b912b96fae7 type=CONTAINER_STOPPED_EVENT
May 13 02:28:19.045238 containerd[1480]: time="2025-05-13T02:28:19.045171134Z" level=warning msg="container event discarded" container=fc8424d7cbbca17285bfc83019671590892d6ff4b30e309c98471556c763ea3a type=CONTAINER_CREATED_EVENT
May 13 02:28:19.103545 containerd[1480]: time="2025-05-13T02:28:19.102588144Z" level=warning msg="container event discarded" container=fc8424d7cbbca17285bfc83019671590892d6ff4b30e309c98471556c763ea3a type=CONTAINER_STARTED_EVENT
May 13 02:28:19.246381 sshd[4327]: Connection closed by 172.24.4.1 port 34788
May 13 02:28:19.246397 sshd-session[4325]: pam_unix(sshd:session): session closed for user core
May 13 02:28:19.257999 systemd[1]: sshd@21-172.24.4.210:22-172.24.4.1:34788.service: Deactivated successfully.
May 13 02:28:19.260992 systemd[1]: session-24.scope: Deactivated successfully.
May 13 02:28:19.263421 systemd-logind[1455]: Session 24 logged out. Waiting for processes to exit.
May 13 02:28:19.265707 systemd-logind[1455]: Removed session 24.
May 13 02:28:19.468384 containerd[1480]: time="2025-05-13T02:28:19.467259665Z" level=warning msg="container event discarded" container=d5bdd6f806242dd5fdb8ee09ef5ead8182ca638b55d0d8cfac1df0b34661d5b2 type=CONTAINER_CREATED_EVENT
May 13 02:28:19.572875 containerd[1480]: time="2025-05-13T02:28:19.572749199Z" level=warning msg="container event discarded" container=d5bdd6f806242dd5fdb8ee09ef5ead8182ca638b55d0d8cfac1df0b34661d5b2 type=CONTAINER_STARTED_EVENT
May 13 02:28:19.696585 containerd[1480]: time="2025-05-13T02:28:19.696284029Z" level=warning msg="container event discarded" container=d5bdd6f806242dd5fdb8ee09ef5ead8182ca638b55d0d8cfac1df0b34661d5b2 type=CONTAINER_STOPPED_EVENT
May 13 02:28:20.517187 containerd[1480]: time="2025-05-13T02:28:20.517026753Z" level=warning msg="container event discarded" container=3424a9b8564681e2a3c8aea4b1c1f625d40e47fc2dd81a12b44152b2cd6a8511 type=CONTAINER_CREATED_EVENT
May 13 02:28:20.605657 containerd[1480]: time="2025-05-13T02:28:20.605512566Z" level=warning msg="container event discarded" container=3424a9b8564681e2a3c8aea4b1c1f625d40e47fc2dd81a12b44152b2cd6a8511 type=CONTAINER_STARTED_EVENT
May 13 02:28:24.287179 systemd[1]: Started sshd@22-172.24.4.210:22-172.24.4.1:50628.service - OpenSSH per-connection server daemon (172.24.4.1:50628).
May 13 02:28:25.555582 sshd[4339]: Accepted publickey for core from 172.24.4.1 port 50628 ssh2: RSA SHA256:gdNeVpR6HpeSX8AXQuuSvHfNPzQEJIBibBwriZyKT5A
May 13 02:28:25.558830 sshd-session[4339]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 02:28:25.578915 systemd-logind[1455]: New session 25 of user core.
May 13 02:28:25.584971 systemd[1]: Started session-25.scope - Session 25 of User core.
May 13 02:28:26.290765 sshd[4341]: Connection closed by 172.24.4.1 port 50628
May 13 02:28:26.288432 sshd-session[4339]: pam_unix(sshd:session): session closed for user core
May 13 02:28:26.309616 systemd[1]: sshd@22-172.24.4.210:22-172.24.4.1:50628.service: Deactivated successfully.
May 13 02:28:26.315445 systemd[1]: session-25.scope: Deactivated successfully.
May 13 02:28:26.319204 systemd-logind[1455]: Session 25 logged out. Waiting for processes to exit.
May 13 02:28:26.325316 systemd[1]: Started sshd@23-172.24.4.210:22-172.24.4.1:50642.service - OpenSSH per-connection server daemon (172.24.4.1:50642).
May 13 02:28:26.330231 systemd-logind[1455]: Removed session 25.
May 13 02:28:27.639546 sshd[4351]: Accepted publickey for core from 172.24.4.1 port 50642 ssh2: RSA SHA256:gdNeVpR6HpeSX8AXQuuSvHfNPzQEJIBibBwriZyKT5A
May 13 02:28:27.641920 sshd-session[4351]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 02:28:27.661430 systemd-logind[1455]: New session 26 of user core.
May 13 02:28:27.673843 systemd[1]: Started session-26.scope - Session 26 of User core.
May 13 02:28:28.790324 containerd[1480]: time="2025-05-13T02:28:28.790180894Z" level=warning msg="container event discarded" container=2f2346d8860fbc791b0d6578370af52436aee5aeaade5b0f8fee9fd3aca43460 type=CONTAINER_CREATED_EVENT
May 13 02:28:28.790324 containerd[1480]: time="2025-05-13T02:28:28.790289888Z" level=warning msg="container event discarded" container=2f2346d8860fbc791b0d6578370af52436aee5aeaade5b0f8fee9fd3aca43460 type=CONTAINER_STARTED_EVENT
May 13 02:28:28.830687 containerd[1480]: time="2025-05-13T02:28:28.830646140Z" level=warning msg="container event discarded" container=b908df277e01a0923bce0317acc894197621ee91eaf9f6a0a308a1b35fc44947 type=CONTAINER_CREATED_EVENT
May 13 02:28:28.898945 containerd[1480]: time="2025-05-13T02:28:28.898866194Z" level=warning msg="container event discarded" container=b7bee0492a8c2b749359ce0d49c04567aa99c4cf42336b99e48d1c5f6d5206b1 type=CONTAINER_CREATED_EVENT
May 13 02:28:28.898945 containerd[1480]: time="2025-05-13T02:28:28.898907482Z" level=warning msg="container event discarded" container=b7bee0492a8c2b749359ce0d49c04567aa99c4cf42336b99e48d1c5f6d5206b1 type=CONTAINER_STARTED_EVENT
May 13 02:28:28.923374 containerd[1480]: time="2025-05-13T02:28:28.923255412Z" level=warning msg="container event discarded" container=b908df277e01a0923bce0317acc894197621ee91eaf9f6a0a308a1b35fc44947 type=CONTAINER_STARTED_EVENT
May 13 02:28:28.934685 containerd[1480]: time="2025-05-13T02:28:28.934613430Z" level=warning msg="container event discarded" container=999ac4e5285e0a2b9b3e9cf48cc98c1bf5523cd2e3a8303392c42d63b32a6945 type=CONTAINER_CREATED_EVENT
May 13 02:28:29.006045 containerd[1480]: time="2025-05-13T02:28:29.005940119Z" level=warning msg="container event discarded" container=999ac4e5285e0a2b9b3e9cf48cc98c1bf5523cd2e3a8303392c42d63b32a6945 type=CONTAINER_STARTED_EVENT
May 13 02:28:30.145447 containerd[1480]: time="2025-05-13T02:28:30.144264972Z" level=info msg="StopContainer for \"fc8424d7cbbca17285bfc83019671590892d6ff4b30e309c98471556c763ea3a\" with timeout 30 (s)"
May 13 02:28:30.151098 containerd[1480]: time="2025-05-13T02:28:30.151043160Z" level=info msg="Stop container \"fc8424d7cbbca17285bfc83019671590892d6ff4b30e309c98471556c763ea3a\" with signal terminated"
May 13 02:28:30.178331 systemd[1]: cri-containerd-fc8424d7cbbca17285bfc83019671590892d6ff4b30e309c98471556c763ea3a.scope: Deactivated successfully.
May 13 02:28:30.180638 systemd[1]: cri-containerd-fc8424d7cbbca17285bfc83019671590892d6ff4b30e309c98471556c763ea3a.scope: Consumed 1.451s CPU time, 29.3M memory peak, 4K written to disk.
May 13 02:28:30.187277 containerd[1480]: time="2025-05-13T02:28:30.186952351Z" level=info msg="received exit event container_id:\"fc8424d7cbbca17285bfc83019671590892d6ff4b30e309c98471556c763ea3a\" id:\"fc8424d7cbbca17285bfc83019671590892d6ff4b30e309c98471556c763ea3a\" pid:3364 exited_at:{seconds:1747103310 nanos:185630138}"
May 13 02:28:30.188472 containerd[1480]: time="2025-05-13T02:28:30.188418514Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fc8424d7cbbca17285bfc83019671590892d6ff4b30e309c98471556c763ea3a\" id:\"fc8424d7cbbca17285bfc83019671590892d6ff4b30e309c98471556c763ea3a\" pid:3364 exited_at:{seconds:1747103310 nanos:185630138}"
May 13 02:28:30.220430 containerd[1480]: time="2025-05-13T02:28:30.220366145Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 13 02:28:30.234061 containerd[1480]: time="2025-05-13T02:28:30.233544612Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3424a9b8564681e2a3c8aea4b1c1f625d40e47fc2dd81a12b44152b2cd6a8511\" id:\"f54f3c9b6f0fd03b9dff4bc88196a77db5ebf0ac6bc221f8f254f0bb15b06010\" pid:4381 exited_at:{seconds:1747103310 nanos:232719813}"
May 13 02:28:30.239638 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fc8424d7cbbca17285bfc83019671590892d6ff4b30e309c98471556c763ea3a-rootfs.mount: Deactivated successfully.
May 13 02:28:30.243741 containerd[1480]: time="2025-05-13T02:28:30.243533278Z" level=info msg="StopContainer for \"3424a9b8564681e2a3c8aea4b1c1f625d40e47fc2dd81a12b44152b2cd6a8511\" with timeout 2 (s)"
May 13 02:28:30.244904 containerd[1480]: time="2025-05-13T02:28:30.244863987Z" level=info msg="Stop container \"3424a9b8564681e2a3c8aea4b1c1f625d40e47fc2dd81a12b44152b2cd6a8511\" with signal terminated"
May 13 02:28:30.273562 systemd-networkd[1385]: lxc_health: Link DOWN
May 13 02:28:30.273571 systemd-networkd[1385]: lxc_health: Lost carrier
May 13 02:28:30.292259 containerd[1480]: time="2025-05-13T02:28:30.290349159Z" level=info msg="StopContainer for \"fc8424d7cbbca17285bfc83019671590892d6ff4b30e309c98471556c763ea3a\" returns successfully"
May 13 02:28:30.291038 systemd[1]: cri-containerd-3424a9b8564681e2a3c8aea4b1c1f625d40e47fc2dd81a12b44152b2cd6a8511.scope: Deactivated successfully.
May 13 02:28:30.291334 systemd[1]: cri-containerd-3424a9b8564681e2a3c8aea4b1c1f625d40e47fc2dd81a12b44152b2cd6a8511.scope: Consumed 10.411s CPU time, 124.1M memory peak, 128K read from disk, 13.3M written to disk.
May 13 02:28:30.294131 containerd[1480]: time="2025-05-13T02:28:30.293642514Z" level=info msg="StopPodSandbox for \"d947fdf12e23fcaf9d5360f33e3eb2d6d2bffabf24b60a1af9106e237e6cf367\""
May 13 02:28:30.294131 containerd[1480]: time="2025-05-13T02:28:30.294045310Z" level=info msg="Container to stop \"fc8424d7cbbca17285bfc83019671590892d6ff4b30e309c98471556c763ea3a\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 13 02:28:30.301435 containerd[1480]: time="2025-05-13T02:28:30.301340399Z" level=info msg="received exit event container_id:\"3424a9b8564681e2a3c8aea4b1c1f625d40e47fc2dd81a12b44152b2cd6a8511\" id:\"3424a9b8564681e2a3c8aea4b1c1f625d40e47fc2dd81a12b44152b2cd6a8511\" pid:3433 exited_at:{seconds:1747103310 nanos:300703072}"
May 13 02:28:30.301975 containerd[1480]: time="2025-05-13T02:28:30.301886895Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3424a9b8564681e2a3c8aea4b1c1f625d40e47fc2dd81a12b44152b2cd6a8511\" id:\"3424a9b8564681e2a3c8aea4b1c1f625d40e47fc2dd81a12b44152b2cd6a8511\" pid:3433 exited_at:{seconds:1747103310 nanos:300703072}"
May 13 02:28:30.334048 systemd[1]: cri-containerd-d947fdf12e23fcaf9d5360f33e3eb2d6d2bffabf24b60a1af9106e237e6cf367.scope: Deactivated successfully.
May 13 02:28:30.339163 containerd[1480]: time="2025-05-13T02:28:30.339020684Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d947fdf12e23fcaf9d5360f33e3eb2d6d2bffabf24b60a1af9106e237e6cf367\" id:\"d947fdf12e23fcaf9d5360f33e3eb2d6d2bffabf24b60a1af9106e237e6cf367\" pid:2963 exit_status:137 exited_at:{seconds:1747103310 nanos:338329357}"
May 13 02:28:30.352948 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3424a9b8564681e2a3c8aea4b1c1f625d40e47fc2dd81a12b44152b2cd6a8511-rootfs.mount: Deactivated successfully.
May 13 02:28:30.406802 containerd[1480]: time="2025-05-13T02:28:30.406682219Z" level=info msg="StopContainer for \"3424a9b8564681e2a3c8aea4b1c1f625d40e47fc2dd81a12b44152b2cd6a8511\" returns successfully"
May 13 02:28:30.408304 containerd[1480]: time="2025-05-13T02:28:30.408165936Z" level=info msg="StopPodSandbox for \"f8c9ab615451e1f44ee30b374ce014a3b78f2ece99b542b34f324ffa3be1b621\""
May 13 02:28:30.408672 containerd[1480]: time="2025-05-13T02:28:30.408530280Z" level=info msg="Container to stop \"9ca926ce60be1251ecd7104ddfae580b80f177ec35f606b4086176cd52581d5c\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 13 02:28:30.408884 containerd[1480]: time="2025-05-13T02:28:30.408865209Z" level=info msg="Container to stop \"878f636ba2a4dccdcb38fc598418f67caf2064e7cfb13c3914491b6b1436e0b5\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 13 02:28:30.409000 containerd[1480]: time="2025-05-13T02:28:30.408982960Z" level=info msg="Container to stop \"1f4a482ada3525efe3ad35de3604d357922799e89a573ec7b64a8b912b96fae7\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 13 02:28:30.409137 containerd[1480]: time="2025-05-13T02:28:30.409118174Z" level=info msg="Container to stop \"d5bdd6f806242dd5fdb8ee09ef5ead8182ca638b55d0d8cfac1df0b34661d5b2\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 13 02:28:30.409353 containerd[1480]: time="2025-05-13T02:28:30.409333359Z" level=info msg="Container to stop \"3424a9b8564681e2a3c8aea4b1c1f625d40e47fc2dd81a12b44152b2cd6a8511\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 13 02:28:30.416867 systemd[1]: cri-containerd-f8c9ab615451e1f44ee30b374ce014a3b78f2ece99b542b34f324ffa3be1b621.scope: Deactivated successfully.
May 13 02:28:30.425435 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d947fdf12e23fcaf9d5360f33e3eb2d6d2bffabf24b60a1af9106e237e6cf367-rootfs.mount: Deactivated successfully.
May 13 02:28:30.454625 containerd[1480]: time="2025-05-13T02:28:30.451219051Z" level=info msg="shim disconnected" id=d947fdf12e23fcaf9d5360f33e3eb2d6d2bffabf24b60a1af9106e237e6cf367 namespace=k8s.io
May 13 02:28:30.454625 containerd[1480]: time="2025-05-13T02:28:30.451549341Z" level=warning msg="cleaning up after shim disconnected" id=d947fdf12e23fcaf9d5360f33e3eb2d6d2bffabf24b60a1af9106e237e6cf367 namespace=k8s.io
May 13 02:28:30.454625 containerd[1480]: time="2025-05-13T02:28:30.451563237Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 13 02:28:30.456118 containerd[1480]: time="2025-05-13T02:28:30.455851440Z" level=info msg="shim disconnected" id=f8c9ab615451e1f44ee30b374ce014a3b78f2ece99b542b34f324ffa3be1b621 namespace=k8s.io
May 13 02:28:30.456118 containerd[1480]: time="2025-05-13T02:28:30.455876367Z" level=warning msg="cleaning up after shim disconnected" id=f8c9ab615451e1f44ee30b374ce014a3b78f2ece99b542b34f324ffa3be1b621 namespace=k8s.io
May 13 02:28:30.456118 containerd[1480]: time="2025-05-13T02:28:30.455884923Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 13 02:28:30.457640 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f8c9ab615451e1f44ee30b374ce014a3b78f2ece99b542b34f324ffa3be1b621-rootfs.mount: Deactivated successfully.
May 13 02:28:30.491405 containerd[1480]: time="2025-05-13T02:28:30.487395884Z" level=info msg="received exit event sandbox_id:\"f8c9ab615451e1f44ee30b374ce014a3b78f2ece99b542b34f324ffa3be1b621\" exit_status:137 exited_at:{seconds:1747103310 nanos:425517809}"
May 13 02:28:30.491405 containerd[1480]: time="2025-05-13T02:28:30.487711837Z" level=info msg="TearDown network for sandbox \"f8c9ab615451e1f44ee30b374ce014a3b78f2ece99b542b34f324ffa3be1b621\" successfully"
May 13 02:28:30.491405 containerd[1480]: time="2025-05-13T02:28:30.487730973Z" level=info msg="StopPodSandbox for \"f8c9ab615451e1f44ee30b374ce014a3b78f2ece99b542b34f324ffa3be1b621\" returns successfully"
May 13 02:28:30.492559 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f8c9ab615451e1f44ee30b374ce014a3b78f2ece99b542b34f324ffa3be1b621-shm.mount: Deactivated successfully.
May 13 02:28:30.499759 containerd[1480]: time="2025-05-13T02:28:30.499724044Z" level=info msg="received exit event sandbox_id:\"d947fdf12e23fcaf9d5360f33e3eb2d6d2bffabf24b60a1af9106e237e6cf367\" exit_status:137 exited_at:{seconds:1747103310 nanos:338329357}"
May 13 02:28:30.500242 containerd[1480]: time="2025-05-13T02:28:30.500214154Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f8c9ab615451e1f44ee30b374ce014a3b78f2ece99b542b34f324ffa3be1b621\" id:\"f8c9ab615451e1f44ee30b374ce014a3b78f2ece99b542b34f324ffa3be1b621\" pid:2917 exit_status:137 exited_at:{seconds:1747103310 nanos:425517809}"
May 13 02:28:30.501890 containerd[1480]: time="2025-05-13T02:28:30.501714752Z" level=info msg="TearDown network for sandbox \"d947fdf12e23fcaf9d5360f33e3eb2d6d2bffabf24b60a1af9106e237e6cf367\" successfully"
May 13 02:28:30.501890 containerd[1480]: time="2025-05-13T02:28:30.501755148Z" level=info msg="StopPodSandbox for \"d947fdf12e23fcaf9d5360f33e3eb2d6d2bffabf24b60a1af9106e237e6cf367\" returns successfully"
May 13 02:28:30.577668 kubelet[2813]: I0513 02:28:30.577550 2813 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a36d1900-2e43-45d8-8b83-bb11ec8f4b4f-cilium-config-path\") pod \"a36d1900-2e43-45d8-8b83-bb11ec8f4b4f\" (UID: \"a36d1900-2e43-45d8-8b83-bb11ec8f4b4f\") "
May 13 02:28:30.577668 kubelet[2813]: I0513 02:28:30.577683 2813 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a36d1900-2e43-45d8-8b83-bb11ec8f4b4f-hostproc\") pod \"a36d1900-2e43-45d8-8b83-bb11ec8f4b4f\" (UID: \"a36d1900-2e43-45d8-8b83-bb11ec8f4b4f\") "
May 13 02:28:30.578425 kubelet[2813]: I0513 02:28:30.577732 2813 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a36d1900-2e43-45d8-8b83-bb11ec8f4b4f-host-proc-sys-net\") pod \"a36d1900-2e43-45d8-8b83-bb11ec8f4b4f\" (UID: \"a36d1900-2e43-45d8-8b83-bb11ec8f4b4f\") "
May 13 02:28:30.578425 kubelet[2813]: I0513 02:28:30.577775 2813 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a36d1900-2e43-45d8-8b83-bb11ec8f4b4f-host-proc-sys-kernel\") pod \"a36d1900-2e43-45d8-8b83-bb11ec8f4b4f\" (UID: \"a36d1900-2e43-45d8-8b83-bb11ec8f4b4f\") "
May 13 02:28:30.578425 kubelet[2813]: I0513 02:28:30.577832 2813 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a36d1900-2e43-45d8-8b83-bb11ec8f4b4f-bpf-maps\") pod \"a36d1900-2e43-45d8-8b83-bb11ec8f4b4f\" (UID: \"a36d1900-2e43-45d8-8b83-bb11ec8f4b4f\") "
May 13 02:28:30.578425 kubelet[2813]: I0513 02:28:30.577876 2813 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a36d1900-2e43-45d8-8b83-bb11ec8f4b4f-lib-modules\") pod \"a36d1900-2e43-45d8-8b83-bb11ec8f4b4f\" (UID: \"a36d1900-2e43-45d8-8b83-bb11ec8f4b4f\") "
May 13 02:28:30.578425 kubelet[2813]: I0513 02:28:30.577944 2813 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8zvhh\" (UniqueName: \"kubernetes.io/projected/33792f8b-cfcf-44d7-8a93-779a7b8a6b46-kube-api-access-8zvhh\") pod \"33792f8b-cfcf-44d7-8a93-779a7b8a6b46\" (UID: \"33792f8b-cfcf-44d7-8a93-779a7b8a6b46\") "
May 13 02:28:30.578425 kubelet[2813]: I0513 02:28:30.578037 2813 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/33792f8b-cfcf-44d7-8a93-779a7b8a6b46-cilium-config-path\") pod \"33792f8b-cfcf-44d7-8a93-779a7b8a6b46\" (UID: \"33792f8b-cfcf-44d7-8a93-779a7b8a6b46\") "
May 13 02:28:30.580626 kubelet[2813]: I0513 02:28:30.578122 2813 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a36d1900-2e43-45d8-8b83-bb11ec8f4b4f-cilium-cgroup\") pod \"a36d1900-2e43-45d8-8b83-bb11ec8f4b4f\" (UID: \"a36d1900-2e43-45d8-8b83-bb11ec8f4b4f\") "
May 13 02:28:30.580626 kubelet[2813]: I0513 02:28:30.578224 2813 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a36d1900-2e43-45d8-8b83-bb11ec8f4b4f-cilium-run\") pod \"a36d1900-2e43-45d8-8b83-bb11ec8f4b4f\" (UID: \"a36d1900-2e43-45d8-8b83-bb11ec8f4b4f\") "
May 13 02:28:30.580626 kubelet[2813]: I0513 02:28:30.578286 2813 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a36d1900-2e43-45d8-8b83-bb11ec8f4b4f-cni-path\") pod \"a36d1900-2e43-45d8-8b83-bb11ec8f4b4f\" (UID: \"a36d1900-2e43-45d8-8b83-bb11ec8f4b4f\") "
May 13 02:28:30.580626 kubelet[2813]: I0513 02:28:30.578360 2813 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a36d1900-2e43-45d8-8b83-bb11ec8f4b4f-clustermesh-secrets\") pod \"a36d1900-2e43-45d8-8b83-bb11ec8f4b4f\" (UID: \"a36d1900-2e43-45d8-8b83-bb11ec8f4b4f\") "
May 13 02:28:30.580626 kubelet[2813]: I0513 02:28:30.578427 2813 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a36d1900-2e43-45d8-8b83-bb11ec8f4b4f-etc-cni-netd\") pod \"a36d1900-2e43-45d8-8b83-bb11ec8f4b4f\" (UID: \"a36d1900-2e43-45d8-8b83-bb11ec8f4b4f\") "
May 13 02:28:30.580626 kubelet[2813]: I0513 02:28:30.578657 2813 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a36d1900-2e43-45d8-8b83-bb11ec8f4b4f-hubble-tls\") pod \"a36d1900-2e43-45d8-8b83-bb11ec8f4b4f\" (UID: \"a36d1900-2e43-45d8-8b83-bb11ec8f4b4f\") "
May 13 02:28:30.580882 kubelet[2813]: I0513 02:28:30.579094 2813 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a36d1900-2e43-45d8-8b83-bb11ec8f4b4f-xtables-lock\") pod \"a36d1900-2e43-45d8-8b83-bb11ec8f4b4f\" (UID: \"a36d1900-2e43-45d8-8b83-bb11ec8f4b4f\") "
May 13 02:28:30.580882 kubelet[2813]: I0513 02:28:30.579282 2813 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pw89d\" (UniqueName: \"kubernetes.io/projected/a36d1900-2e43-45d8-8b83-bb11ec8f4b4f-kube-api-access-pw89d\") pod \"a36d1900-2e43-45d8-8b83-bb11ec8f4b4f\" (UID: \"a36d1900-2e43-45d8-8b83-bb11ec8f4b4f\") "
May 13 02:28:30.580882 kubelet[2813]: I0513 02:28:30.579990 2813 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a36d1900-2e43-45d8-8b83-bb11ec8f4b4f-hostproc" (OuterVolumeSpecName: "hostproc") pod "a36d1900-2e43-45d8-8b83-bb11ec8f4b4f" (UID: "a36d1900-2e43-45d8-8b83-bb11ec8f4b4f"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 13 02:28:30.580882 kubelet[2813]: I0513 02:28:30.580251 2813 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a36d1900-2e43-45d8-8b83-bb11ec8f4b4f-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "a36d1900-2e43-45d8-8b83-bb11ec8f4b4f" (UID: "a36d1900-2e43-45d8-8b83-bb11ec8f4b4f"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 13 02:28:30.580882 kubelet[2813]: I0513 02:28:30.580347 2813 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a36d1900-2e43-45d8-8b83-bb11ec8f4b4f-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "a36d1900-2e43-45d8-8b83-bb11ec8f4b4f" (UID: "a36d1900-2e43-45d8-8b83-bb11ec8f4b4f"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 13 02:28:30.581105 kubelet[2813]: I0513 02:28:30.580926 2813 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a36d1900-2e43-45d8-8b83-bb11ec8f4b4f-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "a36d1900-2e43-45d8-8b83-bb11ec8f4b4f" (UID: "a36d1900-2e43-45d8-8b83-bb11ec8f4b4f"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 13 02:28:30.581212 kubelet[2813]: I0513 02:28:30.581133 2813 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a36d1900-2e43-45d8-8b83-bb11ec8f4b4f-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "a36d1900-2e43-45d8-8b83-bb11ec8f4b4f" (UID: "a36d1900-2e43-45d8-8b83-bb11ec8f4b4f"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 13 02:28:30.582538 kubelet[2813]: I0513 02:28:30.582505 2813 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a36d1900-2e43-45d8-8b83-bb11ec8f4b4f-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "a36d1900-2e43-45d8-8b83-bb11ec8f4b4f" (UID: "a36d1900-2e43-45d8-8b83-bb11ec8f4b4f"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 13 02:28:30.585771 kubelet[2813]: I0513 02:28:30.582689 2813 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a36d1900-2e43-45d8-8b83-bb11ec8f4b4f-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "a36d1900-2e43-45d8-8b83-bb11ec8f4b4f" (UID: "a36d1900-2e43-45d8-8b83-bb11ec8f4b4f"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 13 02:28:30.585902 kubelet[2813]: I0513 02:28:30.582718 2813 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a36d1900-2e43-45d8-8b83-bb11ec8f4b4f-cni-path" (OuterVolumeSpecName: "cni-path") pod "a36d1900-2e43-45d8-8b83-bb11ec8f4b4f" (UID: "a36d1900-2e43-45d8-8b83-bb11ec8f4b4f"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 13 02:28:30.586640 kubelet[2813]: I0513 02:28:30.584922 2813 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a36d1900-2e43-45d8-8b83-bb11ec8f4b4f-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "a36d1900-2e43-45d8-8b83-bb11ec8f4b4f" (UID: "a36d1900-2e43-45d8-8b83-bb11ec8f4b4f"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 13 02:28:30.586729 kubelet[2813]: I0513 02:28:30.586068 2813 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a36d1900-2e43-45d8-8b83-bb11ec8f4b4f-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "a36d1900-2e43-45d8-8b83-bb11ec8f4b4f" (UID: "a36d1900-2e43-45d8-8b83-bb11ec8f4b4f"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 13 02:28:30.587359 kubelet[2813]: I0513 02:28:30.587332 2813 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a36d1900-2e43-45d8-8b83-bb11ec8f4b4f-kube-api-access-pw89d" (OuterVolumeSpecName: "kube-api-access-pw89d") pod "a36d1900-2e43-45d8-8b83-bb11ec8f4b4f" (UID: "a36d1900-2e43-45d8-8b83-bb11ec8f4b4f"). InnerVolumeSpecName "kube-api-access-pw89d". PluginName "kubernetes.io/projected", VolumeGidValue ""
May 13 02:28:30.589355 kubelet[2813]: I0513 02:28:30.589329 2813 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a36d1900-2e43-45d8-8b83-bb11ec8f4b4f-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "a36d1900-2e43-45d8-8b83-bb11ec8f4b4f" (UID: "a36d1900-2e43-45d8-8b83-bb11ec8f4b4f"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
May 13 02:28:30.594334 kubelet[2813]: I0513 02:28:30.594287 2813 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33792f8b-cfcf-44d7-8a93-779a7b8a6b46-kube-api-access-8zvhh" (OuterVolumeSpecName: "kube-api-access-8zvhh") pod "33792f8b-cfcf-44d7-8a93-779a7b8a6b46" (UID: "33792f8b-cfcf-44d7-8a93-779a7b8a6b46"). InnerVolumeSpecName "kube-api-access-8zvhh". PluginName "kubernetes.io/projected", VolumeGidValue ""
May 13 02:28:30.599477 kubelet[2813]: I0513 02:28:30.599391 2813 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a36d1900-2e43-45d8-8b83-bb11ec8f4b4f-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "a36d1900-2e43-45d8-8b83-bb11ec8f4b4f" (UID: "a36d1900-2e43-45d8-8b83-bb11ec8f4b4f"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
May 13 02:28:30.606069 kubelet[2813]: I0513 02:28:30.606009 2813 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a36d1900-2e43-45d8-8b83-bb11ec8f4b4f-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "a36d1900-2e43-45d8-8b83-bb11ec8f4b4f" (UID: "a36d1900-2e43-45d8-8b83-bb11ec8f4b4f"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
May 13 02:28:30.607080 kubelet[2813]: I0513 02:28:30.607043 2813 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/33792f8b-cfcf-44d7-8a93-779a7b8a6b46-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "33792f8b-cfcf-44d7-8a93-779a7b8a6b46" (UID: "33792f8b-cfcf-44d7-8a93-779a7b8a6b46"). InnerVolumeSpecName "cilium-config-path".
PluginName "kubernetes.io/configmap", VolumeGidValue "" May 13 02:28:30.680762 kubelet[2813]: I0513 02:28:30.680651 2813 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-pw89d\" (UniqueName: \"kubernetes.io/projected/a36d1900-2e43-45d8-8b83-bb11ec8f4b4f-kube-api-access-pw89d\") on node \"ci-4284-0-0-n-0dbb4c7115.novalocal\" DevicePath \"\"" May 13 02:28:30.680762 kubelet[2813]: I0513 02:28:30.680719 2813 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a36d1900-2e43-45d8-8b83-bb11ec8f4b4f-cilium-config-path\") on node \"ci-4284-0-0-n-0dbb4c7115.novalocal\" DevicePath \"\"" May 13 02:28:30.681780 kubelet[2813]: I0513 02:28:30.681752 2813 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a36d1900-2e43-45d8-8b83-bb11ec8f4b4f-hostproc\") on node \"ci-4284-0-0-n-0dbb4c7115.novalocal\" DevicePath \"\"" May 13 02:28:30.682233 kubelet[2813]: I0513 02:28:30.681877 2813 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a36d1900-2e43-45d8-8b83-bb11ec8f4b4f-host-proc-sys-net\") on node \"ci-4284-0-0-n-0dbb4c7115.novalocal\" DevicePath \"\"" May 13 02:28:30.682233 kubelet[2813]: I0513 02:28:30.681896 2813 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a36d1900-2e43-45d8-8b83-bb11ec8f4b4f-host-proc-sys-kernel\") on node \"ci-4284-0-0-n-0dbb4c7115.novalocal\" DevicePath \"\"" May 13 02:28:30.682233 kubelet[2813]: I0513 02:28:30.681933 2813 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a36d1900-2e43-45d8-8b83-bb11ec8f4b4f-bpf-maps\") on node \"ci-4284-0-0-n-0dbb4c7115.novalocal\" DevicePath \"\"" May 13 02:28:30.682233 kubelet[2813]: I0513 02:28:30.681959 2813 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/a36d1900-2e43-45d8-8b83-bb11ec8f4b4f-lib-modules\") on node \"ci-4284-0-0-n-0dbb4c7115.novalocal\" DevicePath \"\"" May 13 02:28:30.682233 kubelet[2813]: I0513 02:28:30.681977 2813 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-8zvhh\" (UniqueName: \"kubernetes.io/projected/33792f8b-cfcf-44d7-8a93-779a7b8a6b46-kube-api-access-8zvhh\") on node \"ci-4284-0-0-n-0dbb4c7115.novalocal\" DevicePath \"\"" May 13 02:28:30.682233 kubelet[2813]: I0513 02:28:30.681988 2813 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/33792f8b-cfcf-44d7-8a93-779a7b8a6b46-cilium-config-path\") on node \"ci-4284-0-0-n-0dbb4c7115.novalocal\" DevicePath \"\"" May 13 02:28:30.682233 kubelet[2813]: I0513 02:28:30.682001 2813 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a36d1900-2e43-45d8-8b83-bb11ec8f4b4f-cilium-cgroup\") on node \"ci-4284-0-0-n-0dbb4c7115.novalocal\" DevicePath \"\"" May 13 02:28:30.682722 kubelet[2813]: I0513 02:28:30.682017 2813 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a36d1900-2e43-45d8-8b83-bb11ec8f4b4f-cilium-run\") on node \"ci-4284-0-0-n-0dbb4c7115.novalocal\" DevicePath \"\"" May 13 02:28:30.682722 kubelet[2813]: I0513 02:28:30.682032 2813 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a36d1900-2e43-45d8-8b83-bb11ec8f4b4f-cni-path\") on node \"ci-4284-0-0-n-0dbb4c7115.novalocal\" DevicePath \"\"" May 13 02:28:30.682722 kubelet[2813]: I0513 02:28:30.682042 2813 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a36d1900-2e43-45d8-8b83-bb11ec8f4b4f-clustermesh-secrets\") on node \"ci-4284-0-0-n-0dbb4c7115.novalocal\" DevicePath \"\"" May 13 02:28:30.682722 kubelet[2813]: I0513 02:28:30.682090 2813 
reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a36d1900-2e43-45d8-8b83-bb11ec8f4b4f-etc-cni-netd\") on node \"ci-4284-0-0-n-0dbb4c7115.novalocal\" DevicePath \"\"" May 13 02:28:30.682722 kubelet[2813]: I0513 02:28:30.682101 2813 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a36d1900-2e43-45d8-8b83-bb11ec8f4b4f-hubble-tls\") on node \"ci-4284-0-0-n-0dbb4c7115.novalocal\" DevicePath \"\"" May 13 02:28:30.682722 kubelet[2813]: I0513 02:28:30.682117 2813 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a36d1900-2e43-45d8-8b83-bb11ec8f4b4f-xtables-lock\") on node \"ci-4284-0-0-n-0dbb4c7115.novalocal\" DevicePath \"\"" May 13 02:28:30.774481 kubelet[2813]: I0513 02:28:30.773681 2813 scope.go:117] "RemoveContainer" containerID="fc8424d7cbbca17285bfc83019671590892d6ff4b30e309c98471556c763ea3a" May 13 02:28:30.780803 systemd[1]: Removed slice kubepods-besteffort-pod33792f8b_cfcf_44d7_8a93_779a7b8a6b46.slice - libcontainer container kubepods-besteffort-pod33792f8b_cfcf_44d7_8a93_779a7b8a6b46.slice. May 13 02:28:30.781125 systemd[1]: kubepods-besteffort-pod33792f8b_cfcf_44d7_8a93_779a7b8a6b46.slice: Consumed 1.480s CPU time, 29.5M memory peak, 4K written to disk. 
May 13 02:28:30.781519 containerd[1480]: time="2025-05-13T02:28:30.781200220Z" level=info msg="RemoveContainer for \"fc8424d7cbbca17285bfc83019671590892d6ff4b30e309c98471556c763ea3a\"" May 13 02:28:30.808470 containerd[1480]: time="2025-05-13T02:28:30.808345394Z" level=info msg="RemoveContainer for \"fc8424d7cbbca17285bfc83019671590892d6ff4b30e309c98471556c763ea3a\" returns successfully" May 13 02:28:30.809783 kubelet[2813]: I0513 02:28:30.809628 2813 scope.go:117] "RemoveContainer" containerID="fc8424d7cbbca17285bfc83019671590892d6ff4b30e309c98471556c763ea3a" May 13 02:28:30.810640 containerd[1480]: time="2025-05-13T02:28:30.810167706Z" level=error msg="ContainerStatus for \"fc8424d7cbbca17285bfc83019671590892d6ff4b30e309c98471556c763ea3a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"fc8424d7cbbca17285bfc83019671590892d6ff4b30e309c98471556c763ea3a\": not found" May 13 02:28:30.812273 kubelet[2813]: E0513 02:28:30.812176 2813 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"fc8424d7cbbca17285bfc83019671590892d6ff4b30e309c98471556c763ea3a\": not found" containerID="fc8424d7cbbca17285bfc83019671590892d6ff4b30e309c98471556c763ea3a" May 13 02:28:30.813185 kubelet[2813]: I0513 02:28:30.812317 2813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"fc8424d7cbbca17285bfc83019671590892d6ff4b30e309c98471556c763ea3a"} err="failed to get container status \"fc8424d7cbbca17285bfc83019671590892d6ff4b30e309c98471556c763ea3a\": rpc error: code = NotFound desc = an error occurred when try to find container \"fc8424d7cbbca17285bfc83019671590892d6ff4b30e309c98471556c763ea3a\": not found" May 13 02:28:30.813185 kubelet[2813]: I0513 02:28:30.812835 2813 scope.go:117] "RemoveContainer" containerID="3424a9b8564681e2a3c8aea4b1c1f625d40e47fc2dd81a12b44152b2cd6a8511" May 13 
02:28:30.820653 containerd[1480]: time="2025-05-13T02:28:30.820147205Z" level=info msg="RemoveContainer for \"3424a9b8564681e2a3c8aea4b1c1f625d40e47fc2dd81a12b44152b2cd6a8511\"" May 13 02:28:30.821417 systemd[1]: Removed slice kubepods-burstable-poda36d1900_2e43_45d8_8b83_bb11ec8f4b4f.slice - libcontainer container kubepods-burstable-poda36d1900_2e43_45d8_8b83_bb11ec8f4b4f.slice. May 13 02:28:30.821565 systemd[1]: kubepods-burstable-poda36d1900_2e43_45d8_8b83_bb11ec8f4b4f.slice: Consumed 10.509s CPU time, 124.6M memory peak, 128K read from disk, 13.3M written to disk. May 13 02:28:30.831025 containerd[1480]: time="2025-05-13T02:28:30.830903574Z" level=info msg="RemoveContainer for \"3424a9b8564681e2a3c8aea4b1c1f625d40e47fc2dd81a12b44152b2cd6a8511\" returns successfully" May 13 02:28:30.831430 kubelet[2813]: I0513 02:28:30.831383 2813 scope.go:117] "RemoveContainer" containerID="d5bdd6f806242dd5fdb8ee09ef5ead8182ca638b55d0d8cfac1df0b34661d5b2" May 13 02:28:30.841144 containerd[1480]: time="2025-05-13T02:28:30.841061638Z" level=info msg="RemoveContainer for \"d5bdd6f806242dd5fdb8ee09ef5ead8182ca638b55d0d8cfac1df0b34661d5b2\"" May 13 02:28:30.851435 containerd[1480]: time="2025-05-13T02:28:30.851366358Z" level=info msg="RemoveContainer for \"d5bdd6f806242dd5fdb8ee09ef5ead8182ca638b55d0d8cfac1df0b34661d5b2\" returns successfully" May 13 02:28:30.852763 kubelet[2813]: I0513 02:28:30.851911 2813 scope.go:117] "RemoveContainer" containerID="1f4a482ada3525efe3ad35de3604d357922799e89a573ec7b64a8b912b96fae7" May 13 02:28:30.858140 containerd[1480]: time="2025-05-13T02:28:30.858103400Z" level=info msg="RemoveContainer for \"1f4a482ada3525efe3ad35de3604d357922799e89a573ec7b64a8b912b96fae7\"" May 13 02:28:30.865504 containerd[1480]: time="2025-05-13T02:28:30.865430248Z" level=info msg="RemoveContainer for \"1f4a482ada3525efe3ad35de3604d357922799e89a573ec7b64a8b912b96fae7\" returns successfully" May 13 02:28:30.865905 kubelet[2813]: I0513 02:28:30.865823 2813 scope.go:117] 
"RemoveContainer" containerID="878f636ba2a4dccdcb38fc598418f67caf2064e7cfb13c3914491b6b1436e0b5" May 13 02:28:30.868504 containerd[1480]: time="2025-05-13T02:28:30.867815196Z" level=info msg="RemoveContainer for \"878f636ba2a4dccdcb38fc598418f67caf2064e7cfb13c3914491b6b1436e0b5\"" May 13 02:28:30.872344 containerd[1480]: time="2025-05-13T02:28:30.872307381Z" level=info msg="RemoveContainer for \"878f636ba2a4dccdcb38fc598418f67caf2064e7cfb13c3914491b6b1436e0b5\" returns successfully" May 13 02:28:30.872683 kubelet[2813]: I0513 02:28:30.872585 2813 scope.go:117] "RemoveContainer" containerID="9ca926ce60be1251ecd7104ddfae580b80f177ec35f606b4086176cd52581d5c" May 13 02:28:30.874957 containerd[1480]: time="2025-05-13T02:28:30.874931269Z" level=info msg="RemoveContainer for \"9ca926ce60be1251ecd7104ddfae580b80f177ec35f606b4086176cd52581d5c\"" May 13 02:28:30.878970 containerd[1480]: time="2025-05-13T02:28:30.878931591Z" level=info msg="RemoveContainer for \"9ca926ce60be1251ecd7104ddfae580b80f177ec35f606b4086176cd52581d5c\" returns successfully" May 13 02:28:30.879275 kubelet[2813]: I0513 02:28:30.879238 2813 scope.go:117] "RemoveContainer" containerID="3424a9b8564681e2a3c8aea4b1c1f625d40e47fc2dd81a12b44152b2cd6a8511" May 13 02:28:30.879566 containerd[1480]: time="2025-05-13T02:28:30.879530665Z" level=error msg="ContainerStatus for \"3424a9b8564681e2a3c8aea4b1c1f625d40e47fc2dd81a12b44152b2cd6a8511\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3424a9b8564681e2a3c8aea4b1c1f625d40e47fc2dd81a12b44152b2cd6a8511\": not found" May 13 02:28:30.879883 kubelet[2813]: E0513 02:28:30.879674 2813 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3424a9b8564681e2a3c8aea4b1c1f625d40e47fc2dd81a12b44152b2cd6a8511\": not found" containerID="3424a9b8564681e2a3c8aea4b1c1f625d40e47fc2dd81a12b44152b2cd6a8511" May 13 02:28:30.879883 
kubelet[2813]: I0513 02:28:30.879706 2813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3424a9b8564681e2a3c8aea4b1c1f625d40e47fc2dd81a12b44152b2cd6a8511"} err="failed to get container status \"3424a9b8564681e2a3c8aea4b1c1f625d40e47fc2dd81a12b44152b2cd6a8511\": rpc error: code = NotFound desc = an error occurred when try to find container \"3424a9b8564681e2a3c8aea4b1c1f625d40e47fc2dd81a12b44152b2cd6a8511\": not found" May 13 02:28:30.879883 kubelet[2813]: I0513 02:28:30.879733 2813 scope.go:117] "RemoveContainer" containerID="d5bdd6f806242dd5fdb8ee09ef5ead8182ca638b55d0d8cfac1df0b34661d5b2" May 13 02:28:30.880153 kubelet[2813]: E0513 02:28:30.880114 2813 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d5bdd6f806242dd5fdb8ee09ef5ead8182ca638b55d0d8cfac1df0b34661d5b2\": not found" containerID="d5bdd6f806242dd5fdb8ee09ef5ead8182ca638b55d0d8cfac1df0b34661d5b2" May 13 02:28:30.880200 containerd[1480]: time="2025-05-13T02:28:30.879996109Z" level=error msg="ContainerStatus for \"d5bdd6f806242dd5fdb8ee09ef5ead8182ca638b55d0d8cfac1df0b34661d5b2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d5bdd6f806242dd5fdb8ee09ef5ead8182ca638b55d0d8cfac1df0b34661d5b2\": not found" May 13 02:28:30.880599 kubelet[2813]: I0513 02:28:30.880150 2813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d5bdd6f806242dd5fdb8ee09ef5ead8182ca638b55d0d8cfac1df0b34661d5b2"} err="failed to get container status \"d5bdd6f806242dd5fdb8ee09ef5ead8182ca638b55d0d8cfac1df0b34661d5b2\": rpc error: code = NotFound desc = an error occurred when try to find container \"d5bdd6f806242dd5fdb8ee09ef5ead8182ca638b55d0d8cfac1df0b34661d5b2\": not found" May 13 02:28:30.880599 kubelet[2813]: I0513 02:28:30.880172 2813 scope.go:117] "RemoveContainer" 
containerID="1f4a482ada3525efe3ad35de3604d357922799e89a573ec7b64a8b912b96fae7" May 13 02:28:30.880599 kubelet[2813]: E0513 02:28:30.880533 2813 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1f4a482ada3525efe3ad35de3604d357922799e89a573ec7b64a8b912b96fae7\": not found" containerID="1f4a482ada3525efe3ad35de3604d357922799e89a573ec7b64a8b912b96fae7" May 13 02:28:30.880748 containerd[1480]: time="2025-05-13T02:28:30.880321020Z" level=error msg="ContainerStatus for \"1f4a482ada3525efe3ad35de3604d357922799e89a573ec7b64a8b912b96fae7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1f4a482ada3525efe3ad35de3604d357922799e89a573ec7b64a8b912b96fae7\": not found" May 13 02:28:30.881063 kubelet[2813]: I0513 02:28:30.880852 2813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1f4a482ada3525efe3ad35de3604d357922799e89a573ec7b64a8b912b96fae7"} err="failed to get container status \"1f4a482ada3525efe3ad35de3604d357922799e89a573ec7b64a8b912b96fae7\": rpc error: code = NotFound desc = an error occurred when try to find container \"1f4a482ada3525efe3ad35de3604d357922799e89a573ec7b64a8b912b96fae7\": not found" May 13 02:28:30.881063 kubelet[2813]: I0513 02:28:30.880906 2813 scope.go:117] "RemoveContainer" containerID="878f636ba2a4dccdcb38fc598418f67caf2064e7cfb13c3914491b6b1436e0b5" May 13 02:28:30.881359 containerd[1480]: time="2025-05-13T02:28:30.881324093Z" level=error msg="ContainerStatus for \"878f636ba2a4dccdcb38fc598418f67caf2064e7cfb13c3914491b6b1436e0b5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"878f636ba2a4dccdcb38fc598418f67caf2064e7cfb13c3914491b6b1436e0b5\": not found" May 13 02:28:30.881699 kubelet[2813]: E0513 02:28:30.881570 2813 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc 
error: code = NotFound desc = an error occurred when try to find container \"878f636ba2a4dccdcb38fc598418f67caf2064e7cfb13c3914491b6b1436e0b5\": not found" containerID="878f636ba2a4dccdcb38fc598418f67caf2064e7cfb13c3914491b6b1436e0b5" May 13 02:28:30.881699 kubelet[2813]: I0513 02:28:30.881624 2813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"878f636ba2a4dccdcb38fc598418f67caf2064e7cfb13c3914491b6b1436e0b5"} err="failed to get container status \"878f636ba2a4dccdcb38fc598418f67caf2064e7cfb13c3914491b6b1436e0b5\": rpc error: code = NotFound desc = an error occurred when try to find container \"878f636ba2a4dccdcb38fc598418f67caf2064e7cfb13c3914491b6b1436e0b5\": not found" May 13 02:28:30.881699 kubelet[2813]: I0513 02:28:30.881645 2813 scope.go:117] "RemoveContainer" containerID="9ca926ce60be1251ecd7104ddfae580b80f177ec35f606b4086176cd52581d5c" May 13 02:28:30.882705 containerd[1480]: time="2025-05-13T02:28:30.881918019Z" level=error msg="ContainerStatus for \"9ca926ce60be1251ecd7104ddfae580b80f177ec35f606b4086176cd52581d5c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9ca926ce60be1251ecd7104ddfae580b80f177ec35f606b4086176cd52581d5c\": not found" May 13 02:28:30.882759 kubelet[2813]: E0513 02:28:30.882654 2813 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9ca926ce60be1251ecd7104ddfae580b80f177ec35f606b4086176cd52581d5c\": not found" containerID="9ca926ce60be1251ecd7104ddfae580b80f177ec35f606b4086176cd52581d5c" May 13 02:28:30.882759 kubelet[2813]: I0513 02:28:30.882677 2813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9ca926ce60be1251ecd7104ddfae580b80f177ec35f606b4086176cd52581d5c"} err="failed to get container status \"9ca926ce60be1251ecd7104ddfae580b80f177ec35f606b4086176cd52581d5c\": rpc error: 
code = NotFound desc = an error occurred when try to find container \"9ca926ce60be1251ecd7104ddfae580b80f177ec35f606b4086176cd52581d5c\": not found" May 13 02:28:31.240804 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d947fdf12e23fcaf9d5360f33e3eb2d6d2bffabf24b60a1af9106e237e6cf367-shm.mount: Deactivated successfully. May 13 02:28:31.241106 systemd[1]: var-lib-kubelet-pods-33792f8b\x2dcfcf\x2d44d7\x2d8a93\x2d779a7b8a6b46-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d8zvhh.mount: Deactivated successfully. May 13 02:28:31.241368 systemd[1]: var-lib-kubelet-pods-a36d1900\x2d2e43\x2d45d8\x2d8b83\x2dbb11ec8f4b4f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dpw89d.mount: Deactivated successfully. May 13 02:28:31.241705 systemd[1]: var-lib-kubelet-pods-a36d1900\x2d2e43\x2d45d8\x2d8b83\x2dbb11ec8f4b4f-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. May 13 02:28:31.241908 systemd[1]: var-lib-kubelet-pods-a36d1900\x2d2e43\x2d45d8\x2d8b83\x2dbb11ec8f4b4f-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 13 02:28:32.260495 sshd[4354]: Connection closed by 172.24.4.1 port 50642 May 13 02:28:32.260783 sshd-session[4351]: pam_unix(sshd:session): session closed for user core May 13 02:28:32.284536 systemd[1]: sshd@23-172.24.4.210:22-172.24.4.1:50642.service: Deactivated successfully. May 13 02:28:32.289537 kubelet[2813]: I0513 02:28:32.289037 2813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="33792f8b-cfcf-44d7-8a93-779a7b8a6b46" path="/var/lib/kubelet/pods/33792f8b-cfcf-44d7-8a93-779a7b8a6b46/volumes" May 13 02:28:32.292575 kubelet[2813]: I0513 02:28:32.292011 2813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a36d1900-2e43-45d8-8b83-bb11ec8f4b4f" path="/var/lib/kubelet/pods/a36d1900-2e43-45d8-8b83-bb11ec8f4b4f/volumes" May 13 02:28:32.293153 systemd[1]: session-26.scope: Deactivated successfully. 
May 13 02:28:32.293744 systemd[1]: session-26.scope: Consumed 1.428s CPU time, 23.8M memory peak. May 13 02:28:32.295992 systemd-logind[1455]: Session 26 logged out. Waiting for processes to exit. May 13 02:28:32.303701 systemd[1]: Started sshd@24-172.24.4.210:22-172.24.4.1:50656.service - OpenSSH per-connection server daemon (172.24.4.1:50656). May 13 02:28:32.307393 systemd-logind[1455]: Removed session 26. May 13 02:28:32.518236 kubelet[2813]: E0513 02:28:32.517274 2813 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 13 02:28:33.488353 sshd[4505]: Accepted publickey for core from 172.24.4.1 port 50656 ssh2: RSA SHA256:gdNeVpR6HpeSX8AXQuuSvHfNPzQEJIBibBwriZyKT5A May 13 02:28:33.491581 sshd-session[4505]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 02:28:33.512315 systemd-logind[1455]: New session 27 of user core. May 13 02:28:33.518890 systemd[1]: Started session-27.scope - Session 27 of User core. 
May 13 02:28:35.103080 kubelet[2813]: I0513 02:28:35.102947 2813 topology_manager.go:215] "Topology Admit Handler" podUID="2d1ad451-6cea-4f2b-9d22-5c116318bd8b" podNamespace="kube-system" podName="cilium-kjznz" May 13 02:28:35.103643 kubelet[2813]: E0513 02:28:35.103315 2813 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a36d1900-2e43-45d8-8b83-bb11ec8f4b4f" containerName="mount-cgroup" May 13 02:28:35.103643 kubelet[2813]: E0513 02:28:35.103345 2813 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a36d1900-2e43-45d8-8b83-bb11ec8f4b4f" containerName="mount-bpf-fs" May 13 02:28:35.103643 kubelet[2813]: E0513 02:28:35.103358 2813 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="33792f8b-cfcf-44d7-8a93-779a7b8a6b46" containerName="cilium-operator" May 13 02:28:35.103643 kubelet[2813]: E0513 02:28:35.103372 2813 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a36d1900-2e43-45d8-8b83-bb11ec8f4b4f" containerName="apply-sysctl-overwrites" May 13 02:28:35.103643 kubelet[2813]: E0513 02:28:35.103394 2813 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a36d1900-2e43-45d8-8b83-bb11ec8f4b4f" containerName="clean-cilium-state" May 13 02:28:35.103643 kubelet[2813]: E0513 02:28:35.103407 2813 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a36d1900-2e43-45d8-8b83-bb11ec8f4b4f" containerName="cilium-agent" May 13 02:28:35.104875 kubelet[2813]: I0513 02:28:35.104831 2813 memory_manager.go:354] "RemoveStaleState removing state" podUID="a36d1900-2e43-45d8-8b83-bb11ec8f4b4f" containerName="cilium-agent" May 13 02:28:35.104875 kubelet[2813]: I0513 02:28:35.104868 2813 memory_manager.go:354] "RemoveStaleState removing state" podUID="33792f8b-cfcf-44d7-8a93-779a7b8a6b46" containerName="cilium-operator" May 13 02:28:35.117752 systemd[1]: Created slice kubepods-burstable-pod2d1ad451_6cea_4f2b_9d22_5c116318bd8b.slice - libcontainer container 
kubepods-burstable-pod2d1ad451_6cea_4f2b_9d22_5c116318bd8b.slice. May 13 02:28:35.218112 kubelet[2813]: I0513 02:28:35.218066 2813 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2d1ad451-6cea-4f2b-9d22-5c116318bd8b-host-proc-sys-kernel\") pod \"cilium-kjznz\" (UID: \"2d1ad451-6cea-4f2b-9d22-5c116318bd8b\") " pod="kube-system/cilium-kjznz" May 13 02:28:35.218112 kubelet[2813]: I0513 02:28:35.218119 2813 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2d1ad451-6cea-4f2b-9d22-5c116318bd8b-cilium-run\") pod \"cilium-kjznz\" (UID: \"2d1ad451-6cea-4f2b-9d22-5c116318bd8b\") " pod="kube-system/cilium-kjznz" May 13 02:28:35.218350 kubelet[2813]: I0513 02:28:35.218146 2813 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2d1ad451-6cea-4f2b-9d22-5c116318bd8b-bpf-maps\") pod \"cilium-kjznz\" (UID: \"2d1ad451-6cea-4f2b-9d22-5c116318bd8b\") " pod="kube-system/cilium-kjznz" May 13 02:28:35.218350 kubelet[2813]: I0513 02:28:35.218172 2813 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2d1ad451-6cea-4f2b-9d22-5c116318bd8b-cni-path\") pod \"cilium-kjznz\" (UID: \"2d1ad451-6cea-4f2b-9d22-5c116318bd8b\") " pod="kube-system/cilium-kjznz" May 13 02:28:35.218350 kubelet[2813]: I0513 02:28:35.218224 2813 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2d1ad451-6cea-4f2b-9d22-5c116318bd8b-lib-modules\") pod \"cilium-kjznz\" (UID: \"2d1ad451-6cea-4f2b-9d22-5c116318bd8b\") " pod="kube-system/cilium-kjznz" May 13 02:28:35.218350 kubelet[2813]: I0513 02:28:35.218245 2813 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2d1ad451-6cea-4f2b-9d22-5c116318bd8b-cilium-config-path\") pod \"cilium-kjznz\" (UID: \"2d1ad451-6cea-4f2b-9d22-5c116318bd8b\") " pod="kube-system/cilium-kjznz" May 13 02:28:35.218350 kubelet[2813]: I0513 02:28:35.218263 2813 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2d1ad451-6cea-4f2b-9d22-5c116318bd8b-host-proc-sys-net\") pod \"cilium-kjznz\" (UID: \"2d1ad451-6cea-4f2b-9d22-5c116318bd8b\") " pod="kube-system/cilium-kjznz" May 13 02:28:35.218350 kubelet[2813]: I0513 02:28:35.218280 2813 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/2d1ad451-6cea-4f2b-9d22-5c116318bd8b-cilium-ipsec-secrets\") pod \"cilium-kjznz\" (UID: \"2d1ad451-6cea-4f2b-9d22-5c116318bd8b\") " pod="kube-system/cilium-kjznz" May 13 02:28:35.218566 kubelet[2813]: I0513 02:28:35.218298 2813 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2d1ad451-6cea-4f2b-9d22-5c116318bd8b-hubble-tls\") pod \"cilium-kjznz\" (UID: \"2d1ad451-6cea-4f2b-9d22-5c116318bd8b\") " pod="kube-system/cilium-kjznz" May 13 02:28:35.218566 kubelet[2813]: I0513 02:28:35.218318 2813 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2d1ad451-6cea-4f2b-9d22-5c116318bd8b-cilium-cgroup\") pod \"cilium-kjznz\" (UID: \"2d1ad451-6cea-4f2b-9d22-5c116318bd8b\") " pod="kube-system/cilium-kjznz" May 13 02:28:35.218566 kubelet[2813]: I0513 02:28:35.218364 2813 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" 
(UniqueName: \"kubernetes.io/host-path/2d1ad451-6cea-4f2b-9d22-5c116318bd8b-hostproc\") pod \"cilium-kjznz\" (UID: \"2d1ad451-6cea-4f2b-9d22-5c116318bd8b\") " pod="kube-system/cilium-kjznz" May 13 02:28:35.218566 kubelet[2813]: I0513 02:28:35.218381 2813 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2d1ad451-6cea-4f2b-9d22-5c116318bd8b-etc-cni-netd\") pod \"cilium-kjznz\" (UID: \"2d1ad451-6cea-4f2b-9d22-5c116318bd8b\") " pod="kube-system/cilium-kjznz" May 13 02:28:35.218566 kubelet[2813]: I0513 02:28:35.218399 2813 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvgwq\" (UniqueName: \"kubernetes.io/projected/2d1ad451-6cea-4f2b-9d22-5c116318bd8b-kube-api-access-fvgwq\") pod \"cilium-kjznz\" (UID: \"2d1ad451-6cea-4f2b-9d22-5c116318bd8b\") " pod="kube-system/cilium-kjznz" May 13 02:28:35.218566 kubelet[2813]: I0513 02:28:35.218418 2813 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2d1ad451-6cea-4f2b-9d22-5c116318bd8b-xtables-lock\") pod \"cilium-kjznz\" (UID: \"2d1ad451-6cea-4f2b-9d22-5c116318bd8b\") " pod="kube-system/cilium-kjznz" May 13 02:28:35.218732 kubelet[2813]: I0513 02:28:35.218481 2813 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2d1ad451-6cea-4f2b-9d22-5c116318bd8b-clustermesh-secrets\") pod \"cilium-kjznz\" (UID: \"2d1ad451-6cea-4f2b-9d22-5c116318bd8b\") " pod="kube-system/cilium-kjznz" May 13 02:28:35.296512 sshd[4508]: Connection closed by 172.24.4.1 port 50656 May 13 02:28:35.297887 sshd-session[4505]: pam_unix(sshd:session): session closed for user core May 13 02:28:35.322595 systemd[1]: sshd@24-172.24.4.210:22-172.24.4.1:50656.service: Deactivated successfully. 
May 13 02:28:35.330632 systemd[1]: session-27.scope: Deactivated successfully. May 13 02:28:35.333692 systemd[1]: session-27.scope: Consumed 1.043s CPU time, 23.7M memory peak. May 13 02:28:35.338582 systemd-logind[1455]: Session 27 logged out. Waiting for processes to exit. May 13 02:28:35.345188 systemd[1]: Started sshd@25-172.24.4.210:22-172.24.4.1:38126.service - OpenSSH per-connection server daemon (172.24.4.1:38126). May 13 02:28:35.355135 systemd-logind[1455]: Removed session 27. May 13 02:28:35.425560 containerd[1480]: time="2025-05-13T02:28:35.425512364Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kjznz,Uid:2d1ad451-6cea-4f2b-9d22-5c116318bd8b,Namespace:kube-system,Attempt:0,}" May 13 02:28:35.454792 containerd[1480]: time="2025-05-13T02:28:35.454733345Z" level=info msg="connecting to shim aaa754cc59d60c3def95c3a09452a6c1890f9bac1f2b5bbbd174c73712f0ee2a" address="unix:///run/containerd/s/485352e4cd85b06462cafaa6cd1b9176a82e2ff0a73326134826df988ca9128d" namespace=k8s.io protocol=ttrpc version=3 May 13 02:28:35.485664 systemd[1]: Started cri-containerd-aaa754cc59d60c3def95c3a09452a6c1890f9bac1f2b5bbbd174c73712f0ee2a.scope - libcontainer container aaa754cc59d60c3def95c3a09452a6c1890f9bac1f2b5bbbd174c73712f0ee2a. 
May 13 02:28:35.523446 containerd[1480]: time="2025-05-13T02:28:35.523334924Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kjznz,Uid:2d1ad451-6cea-4f2b-9d22-5c116318bd8b,Namespace:kube-system,Attempt:0,} returns sandbox id \"aaa754cc59d60c3def95c3a09452a6c1890f9bac1f2b5bbbd174c73712f0ee2a\"" May 13 02:28:35.533895 containerd[1480]: time="2025-05-13T02:28:35.533811046Z" level=info msg="CreateContainer within sandbox \"aaa754cc59d60c3def95c3a09452a6c1890f9bac1f2b5bbbd174c73712f0ee2a\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 13 02:28:35.541850 containerd[1480]: time="2025-05-13T02:28:35.541682397Z" level=info msg="Container 729cacff6c07f695702cd5c667ad0555b336cb45010654fe1a5a489018d5f66f: CDI devices from CRI Config.CDIDevices: []" May 13 02:28:35.552023 containerd[1480]: time="2025-05-13T02:28:35.551900554Z" level=info msg="CreateContainer within sandbox \"aaa754cc59d60c3def95c3a09452a6c1890f9bac1f2b5bbbd174c73712f0ee2a\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"729cacff6c07f695702cd5c667ad0555b336cb45010654fe1a5a489018d5f66f\"" May 13 02:28:35.554048 containerd[1480]: time="2025-05-13T02:28:35.552825941Z" level=info msg="StartContainer for \"729cacff6c07f695702cd5c667ad0555b336cb45010654fe1a5a489018d5f66f\"" May 13 02:28:35.554048 containerd[1480]: time="2025-05-13T02:28:35.553736120Z" level=info msg="connecting to shim 729cacff6c07f695702cd5c667ad0555b336cb45010654fe1a5a489018d5f66f" address="unix:///run/containerd/s/485352e4cd85b06462cafaa6cd1b9176a82e2ff0a73326134826df988ca9128d" protocol=ttrpc version=3 May 13 02:28:35.582651 systemd[1]: Started cri-containerd-729cacff6c07f695702cd5c667ad0555b336cb45010654fe1a5a489018d5f66f.scope - libcontainer container 729cacff6c07f695702cd5c667ad0555b336cb45010654fe1a5a489018d5f66f. 
May 13 02:28:35.642494 containerd[1480]: time="2025-05-13T02:28:35.642028324Z" level=info msg="StartContainer for \"729cacff6c07f695702cd5c667ad0555b336cb45010654fe1a5a489018d5f66f\" returns successfully" May 13 02:28:35.667340 systemd[1]: cri-containerd-729cacff6c07f695702cd5c667ad0555b336cb45010654fe1a5a489018d5f66f.scope: Deactivated successfully. May 13 02:28:35.671316 containerd[1480]: time="2025-05-13T02:28:35.671026366Z" level=info msg="received exit event container_id:\"729cacff6c07f695702cd5c667ad0555b336cb45010654fe1a5a489018d5f66f\" id:\"729cacff6c07f695702cd5c667ad0555b336cb45010654fe1a5a489018d5f66f\" pid:4583 exited_at:{seconds:1747103315 nanos:670676749}" May 13 02:28:35.672532 containerd[1480]: time="2025-05-13T02:28:35.671450412Z" level=info msg="TaskExit event in podsandbox handler container_id:\"729cacff6c07f695702cd5c667ad0555b336cb45010654fe1a5a489018d5f66f\" id:\"729cacff6c07f695702cd5c667ad0555b336cb45010654fe1a5a489018d5f66f\" pid:4583 exited_at:{seconds:1747103315 nanos:670676749}" May 13 02:28:35.850565 containerd[1480]: time="2025-05-13T02:28:35.849683184Z" level=info msg="CreateContainer within sandbox \"aaa754cc59d60c3def95c3a09452a6c1890f9bac1f2b5bbbd174c73712f0ee2a\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 13 02:28:35.874686 containerd[1480]: time="2025-05-13T02:28:35.874592278Z" level=info msg="Container 98f600a2c153f1d4d1ac1851ff5b82a67970e86809816c90a9b7940e8498e124: CDI devices from CRI Config.CDIDevices: []" May 13 02:28:35.891302 containerd[1480]: time="2025-05-13T02:28:35.891224129Z" level=info msg="CreateContainer within sandbox \"aaa754cc59d60c3def95c3a09452a6c1890f9bac1f2b5bbbd174c73712f0ee2a\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"98f600a2c153f1d4d1ac1851ff5b82a67970e86809816c90a9b7940e8498e124\"" May 13 02:28:35.893100 containerd[1480]: time="2025-05-13T02:28:35.892507238Z" level=info msg="StartContainer for 
\"98f600a2c153f1d4d1ac1851ff5b82a67970e86809816c90a9b7940e8498e124\"" May 13 02:28:35.898416 containerd[1480]: time="2025-05-13T02:28:35.898336243Z" level=info msg="connecting to shim 98f600a2c153f1d4d1ac1851ff5b82a67970e86809816c90a9b7940e8498e124" address="unix:///run/containerd/s/485352e4cd85b06462cafaa6cd1b9176a82e2ff0a73326134826df988ca9128d" protocol=ttrpc version=3 May 13 02:28:35.941259 systemd[1]: Started cri-containerd-98f600a2c153f1d4d1ac1851ff5b82a67970e86809816c90a9b7940e8498e124.scope - libcontainer container 98f600a2c153f1d4d1ac1851ff5b82a67970e86809816c90a9b7940e8498e124. May 13 02:28:35.992654 containerd[1480]: time="2025-05-13T02:28:35.992605069Z" level=info msg="StartContainer for \"98f600a2c153f1d4d1ac1851ff5b82a67970e86809816c90a9b7940e8498e124\" returns successfully" May 13 02:28:36.009998 systemd[1]: cri-containerd-98f600a2c153f1d4d1ac1851ff5b82a67970e86809816c90a9b7940e8498e124.scope: Deactivated successfully. May 13 02:28:36.012215 containerd[1480]: time="2025-05-13T02:28:36.011214003Z" level=info msg="TaskExit event in podsandbox handler container_id:\"98f600a2c153f1d4d1ac1851ff5b82a67970e86809816c90a9b7940e8498e124\" id:\"98f600a2c153f1d4d1ac1851ff5b82a67970e86809816c90a9b7940e8498e124\" pid:4627 exited_at:{seconds:1747103316 nanos:10793323}" May 13 02:28:36.012215 containerd[1480]: time="2025-05-13T02:28:36.011316165Z" level=info msg="received exit event container_id:\"98f600a2c153f1d4d1ac1851ff5b82a67970e86809816c90a9b7940e8498e124\" id:\"98f600a2c153f1d4d1ac1851ff5b82a67970e86809816c90a9b7940e8498e124\" pid:4627 exited_at:{seconds:1747103316 nanos:10793323}" May 13 02:28:36.354429 sshd[4520]: Accepted publickey for core from 172.24.4.1 port 38126 ssh2: RSA SHA256:gdNeVpR6HpeSX8AXQuuSvHfNPzQEJIBibBwriZyKT5A May 13 02:28:36.362036 sshd-session[4520]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 02:28:36.385631 systemd-logind[1455]: New session 28 of user core. 
May 13 02:28:36.406901 systemd[1]: Started session-28.scope - Session 28 of User core. May 13 02:28:36.862698 containerd[1480]: time="2025-05-13T02:28:36.860035630Z" level=info msg="CreateContainer within sandbox \"aaa754cc59d60c3def95c3a09452a6c1890f9bac1f2b5bbbd174c73712f0ee2a\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 13 02:28:36.895365 containerd[1480]: time="2025-05-13T02:28:36.895271755Z" level=info msg="Container 0c2da791aa10b3f421386a2b8473eb297ca7c316cbea866ef4b6c44bbfc05368: CDI devices from CRI Config.CDIDevices: []" May 13 02:28:36.917817 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1711755181.mount: Deactivated successfully. May 13 02:28:36.929771 containerd[1480]: time="2025-05-13T02:28:36.929725462Z" level=info msg="CreateContainer within sandbox \"aaa754cc59d60c3def95c3a09452a6c1890f9bac1f2b5bbbd174c73712f0ee2a\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"0c2da791aa10b3f421386a2b8473eb297ca7c316cbea866ef4b6c44bbfc05368\"" May 13 02:28:36.930573 containerd[1480]: time="2025-05-13T02:28:36.930532919Z" level=info msg="StartContainer for \"0c2da791aa10b3f421386a2b8473eb297ca7c316cbea866ef4b6c44bbfc05368\"" May 13 02:28:36.939491 containerd[1480]: time="2025-05-13T02:28:36.939317985Z" level=info msg="connecting to shim 0c2da791aa10b3f421386a2b8473eb297ca7c316cbea866ef4b6c44bbfc05368" address="unix:///run/containerd/s/485352e4cd85b06462cafaa6cd1b9176a82e2ff0a73326134826df988ca9128d" protocol=ttrpc version=3 May 13 02:28:36.959927 sshd[4656]: Connection closed by 172.24.4.1 port 38126 May 13 02:28:36.960194 sshd-session[4520]: pam_unix(sshd:session): session closed for user core May 13 02:28:36.970989 systemd[1]: sshd@25-172.24.4.210:22-172.24.4.1:38126.service: Deactivated successfully. May 13 02:28:36.973125 systemd[1]: session-28.scope: Deactivated successfully. May 13 02:28:36.973998 systemd-logind[1455]: Session 28 logged out. Waiting for processes to exit. 
May 13 02:28:36.981644 systemd[1]: Started cri-containerd-0c2da791aa10b3f421386a2b8473eb297ca7c316cbea866ef4b6c44bbfc05368.scope - libcontainer container 0c2da791aa10b3f421386a2b8473eb297ca7c316cbea866ef4b6c44bbfc05368. May 13 02:28:36.984854 systemd[1]: Started sshd@26-172.24.4.210:22-172.24.4.1:38130.service - OpenSSH per-connection server daemon (172.24.4.1:38130). May 13 02:28:36.987270 systemd-logind[1455]: Removed session 28. May 13 02:28:37.057877 systemd[1]: cri-containerd-0c2da791aa10b3f421386a2b8473eb297ca7c316cbea866ef4b6c44bbfc05368.scope: Deactivated successfully. May 13 02:28:37.060026 containerd[1480]: time="2025-05-13T02:28:37.059987254Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0c2da791aa10b3f421386a2b8473eb297ca7c316cbea866ef4b6c44bbfc05368\" id:\"0c2da791aa10b3f421386a2b8473eb297ca7c316cbea866ef4b6c44bbfc05368\" pid:4675 exited_at:{seconds:1747103317 nanos:58757024}" May 13 02:28:37.061062 containerd[1480]: time="2025-05-13T02:28:37.061022999Z" level=info msg="received exit event container_id:\"0c2da791aa10b3f421386a2b8473eb297ca7c316cbea866ef4b6c44bbfc05368\" id:\"0c2da791aa10b3f421386a2b8473eb297ca7c316cbea866ef4b6c44bbfc05368\" pid:4675 exited_at:{seconds:1747103317 nanos:58757024}" May 13 02:28:37.073082 containerd[1480]: time="2025-05-13T02:28:37.072945086Z" level=info msg="StartContainer for \"0c2da791aa10b3f421386a2b8473eb297ca7c316cbea866ef4b6c44bbfc05368\" returns successfully" May 13 02:28:37.096108 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0c2da791aa10b3f421386a2b8473eb297ca7c316cbea866ef4b6c44bbfc05368-rootfs.mount: Deactivated successfully. 
May 13 02:28:37.520042 kubelet[2813]: E0513 02:28:37.519930 2813 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 13 02:28:37.875658 containerd[1480]: time="2025-05-13T02:28:37.875565193Z" level=info msg="CreateContainer within sandbox \"aaa754cc59d60c3def95c3a09452a6c1890f9bac1f2b5bbbd174c73712f0ee2a\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 13 02:28:37.898201 containerd[1480]: time="2025-05-13T02:28:37.898084370Z" level=info msg="Container d5a6bfe1aaf353e63f722165ee74fd4a05a6b7568f950e4a312a741c9306197e: CDI devices from CRI Config.CDIDevices: []" May 13 02:28:37.924880 containerd[1480]: time="2025-05-13T02:28:37.924818481Z" level=info msg="CreateContainer within sandbox \"aaa754cc59d60c3def95c3a09452a6c1890f9bac1f2b5bbbd174c73712f0ee2a\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"d5a6bfe1aaf353e63f722165ee74fd4a05a6b7568f950e4a312a741c9306197e\"" May 13 02:28:37.925708 containerd[1480]: time="2025-05-13T02:28:37.925645513Z" level=info msg="StartContainer for \"d5a6bfe1aaf353e63f722165ee74fd4a05a6b7568f950e4a312a741c9306197e\"" May 13 02:28:37.927018 containerd[1480]: time="2025-05-13T02:28:37.926907883Z" level=info msg="connecting to shim d5a6bfe1aaf353e63f722165ee74fd4a05a6b7568f950e4a312a741c9306197e" address="unix:///run/containerd/s/485352e4cd85b06462cafaa6cd1b9176a82e2ff0a73326134826df988ca9128d" protocol=ttrpc version=3 May 13 02:28:37.966619 systemd[1]: Started cri-containerd-d5a6bfe1aaf353e63f722165ee74fd4a05a6b7568f950e4a312a741c9306197e.scope - libcontainer container d5a6bfe1aaf353e63f722165ee74fd4a05a6b7568f950e4a312a741c9306197e. May 13 02:28:38.010173 systemd[1]: cri-containerd-d5a6bfe1aaf353e63f722165ee74fd4a05a6b7568f950e4a312a741c9306197e.scope: Deactivated successfully. 
May 13 02:28:38.013180 containerd[1480]: time="2025-05-13T02:28:38.013135209Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d5a6bfe1aaf353e63f722165ee74fd4a05a6b7568f950e4a312a741c9306197e\" id:\"d5a6bfe1aaf353e63f722165ee74fd4a05a6b7568f950e4a312a741c9306197e\" pid:4716 exited_at:{seconds:1747103318 nanos:12591999}" May 13 02:28:38.015576 containerd[1480]: time="2025-05-13T02:28:38.015534324Z" level=info msg="received exit event container_id:\"d5a6bfe1aaf353e63f722165ee74fd4a05a6b7568f950e4a312a741c9306197e\" id:\"d5a6bfe1aaf353e63f722165ee74fd4a05a6b7568f950e4a312a741c9306197e\" pid:4716 exited_at:{seconds:1747103318 nanos:12591999}" May 13 02:28:38.021482 containerd[1480]: time="2025-05-13T02:28:38.019923436Z" level=info msg="StartContainer for \"d5a6bfe1aaf353e63f722165ee74fd4a05a6b7568f950e4a312a741c9306197e\" returns successfully" May 13 02:28:38.055107 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d5a6bfe1aaf353e63f722165ee74fd4a05a6b7568f950e4a312a741c9306197e-rootfs.mount: Deactivated successfully. May 13 02:28:38.247276 sshd[4673]: Accepted publickey for core from 172.24.4.1 port 38130 ssh2: RSA SHA256:gdNeVpR6HpeSX8AXQuuSvHfNPzQEJIBibBwriZyKT5A May 13 02:28:38.250914 sshd-session[4673]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 02:28:38.265607 systemd-logind[1455]: New session 29 of user core. May 13 02:28:38.279952 systemd[1]: Started session-29.scope - Session 29 of User core. 
May 13 02:28:38.888425 containerd[1480]: time="2025-05-13T02:28:38.888265330Z" level=info msg="CreateContainer within sandbox \"aaa754cc59d60c3def95c3a09452a6c1890f9bac1f2b5bbbd174c73712f0ee2a\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 13 02:28:38.915795 containerd[1480]: time="2025-05-13T02:28:38.915680349Z" level=info msg="Container 832f96d664568b373a2559834c7737f14ba9104ffa90679f29e65cef8cbf5f2d: CDI devices from CRI Config.CDIDevices: []" May 13 02:28:38.950304 containerd[1480]: time="2025-05-13T02:28:38.950162388Z" level=info msg="CreateContainer within sandbox \"aaa754cc59d60c3def95c3a09452a6c1890f9bac1f2b5bbbd174c73712f0ee2a\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"832f96d664568b373a2559834c7737f14ba9104ffa90679f29e65cef8cbf5f2d\"" May 13 02:28:38.954116 containerd[1480]: time="2025-05-13T02:28:38.953630229Z" level=info msg="StartContainer for \"832f96d664568b373a2559834c7737f14ba9104ffa90679f29e65cef8cbf5f2d\"" May 13 02:28:38.956798 containerd[1480]: time="2025-05-13T02:28:38.956709001Z" level=info msg="connecting to shim 832f96d664568b373a2559834c7737f14ba9104ffa90679f29e65cef8cbf5f2d" address="unix:///run/containerd/s/485352e4cd85b06462cafaa6cd1b9176a82e2ff0a73326134826df988ca9128d" protocol=ttrpc version=3 May 13 02:28:38.987617 systemd[1]: Started cri-containerd-832f96d664568b373a2559834c7737f14ba9104ffa90679f29e65cef8cbf5f2d.scope - libcontainer container 832f96d664568b373a2559834c7737f14ba9104ffa90679f29e65cef8cbf5f2d. 
May 13 02:28:39.054184 containerd[1480]: time="2025-05-13T02:28:39.053310496Z" level=info msg="StartContainer for \"832f96d664568b373a2559834c7737f14ba9104ffa90679f29e65cef8cbf5f2d\" returns successfully" May 13 02:28:39.190273 containerd[1480]: time="2025-05-13T02:28:39.190164425Z" level=info msg="TaskExit event in podsandbox handler container_id:\"832f96d664568b373a2559834c7737f14ba9104ffa90679f29e65cef8cbf5f2d\" id:\"eff4ba9f091b4fb84b55096ef9add381f92f8f7994ff061924ef20f513b877db\" pid:4790 exited_at:{seconds:1747103319 nanos:189781025}" May 13 02:28:39.543032 kernel: cryptd: max_cpu_qlen set to 1000 May 13 02:28:39.598506 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm_base(ctr(aes-generic),ghash-generic)))) May 13 02:28:39.909696 kubelet[2813]: I0513 02:28:39.909356 2813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-kjznz" podStartSLOduration=4.909268352 podStartE2EDuration="4.909268352s" podCreationTimestamp="2025-05-13 02:28:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 02:28:39.908394992 +0000 UTC m=+347.715512617" watchObservedRunningTime="2025-05-13 02:28:39.909268352 +0000 UTC m=+347.716385987" May 13 02:28:40.717722 kubelet[2813]: I0513 02:28:40.715436 2813 setters.go:580] "Node became not ready" node="ci-4284-0-0-n-0dbb4c7115.novalocal" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-13T02:28:40Z","lastTransitionTime":"2025-05-13T02:28:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} May 13 02:28:40.864299 containerd[1480]: time="2025-05-13T02:28:40.864246161Z" level=info msg="TaskExit event in podsandbox handler container_id:\"832f96d664568b373a2559834c7737f14ba9104ffa90679f29e65cef8cbf5f2d\" 
id:\"7810b91b509b1bff0cca4245cf04e0aaec7ec825df3bb05344a7814e61c5cad6\" pid:4932 exit_status:1 exited_at:{seconds:1747103320 nanos:863742135}" May 13 02:28:42.841648 systemd-networkd[1385]: lxc_health: Link UP May 13 02:28:42.842745 systemd-networkd[1385]: lxc_health: Gained carrier May 13 02:28:43.101909 containerd[1480]: time="2025-05-13T02:28:43.101749121Z" level=info msg="TaskExit event in podsandbox handler container_id:\"832f96d664568b373a2559834c7737f14ba9104ffa90679f29e65cef8cbf5f2d\" id:\"54285c9ffa45bb79b2f23c847242f29efa034be98a5033f0d61b5aea4fa9270f\" pid:5363 exited_at:{seconds:1747103323 nanos:100227073}" May 13 02:28:44.873996 systemd-networkd[1385]: lxc_health: Gained IPv6LL May 13 02:28:45.340203 containerd[1480]: time="2025-05-13T02:28:45.340152753Z" level=info msg="TaskExit event in podsandbox handler container_id:\"832f96d664568b373a2559834c7737f14ba9104ffa90679f29e65cef8cbf5f2d\" id:\"d2faf309b953df47f172e736e824325c4ffc891b394c99b098ba9d07aad54b6d\" pid:5391 exited_at:{seconds:1747103325 nanos:337185862}" May 13 02:28:47.647295 containerd[1480]: time="2025-05-13T02:28:47.646972133Z" level=info msg="TaskExit event in podsandbox handler container_id:\"832f96d664568b373a2559834c7737f14ba9104ffa90679f29e65cef8cbf5f2d\" id:\"1b49f27bcdc264cb975e488672f5fb949afcf0bddc7766e83f60fab4a1c7b48c\" pid:5422 exited_at:{seconds:1747103327 nanos:646200655}" May 13 02:28:49.851676 containerd[1480]: time="2025-05-13T02:28:49.851414510Z" level=info msg="TaskExit event in podsandbox handler container_id:\"832f96d664568b373a2559834c7737f14ba9104ffa90679f29e65cef8cbf5f2d\" id:\"dcfaf50f53c97dfa457de4380fb0727be2c78710b568d565dd18e17b517ba8b3\" pid:5462 exited_at:{seconds:1747103329 nanos:850965507}" May 13 02:28:50.158250 sshd[4740]: Connection closed by 172.24.4.1 port 38130 May 13 02:28:50.163232 sshd-session[4673]: pam_unix(sshd:session): session closed for user core May 13 02:28:50.178995 systemd-logind[1455]: Session 29 logged out. 
Waiting for processes to exit. May 13 02:28:50.182790 systemd[1]: sshd@26-172.24.4.210:22-172.24.4.1:38130.service: Deactivated successfully. May 13 02:28:50.194645 systemd[1]: session-29.scope: Deactivated successfully. May 13 02:28:50.199266 systemd-logind[1455]: Removed session 29. May 13 02:28:52.342558 containerd[1480]: time="2025-05-13T02:28:52.341204902Z" level=info msg="StopPodSandbox for \"d947fdf12e23fcaf9d5360f33e3eb2d6d2bffabf24b60a1af9106e237e6cf367\"" May 13 02:28:52.342558 containerd[1480]: time="2025-05-13T02:28:52.342266045Z" level=info msg="TearDown network for sandbox \"d947fdf12e23fcaf9d5360f33e3eb2d6d2bffabf24b60a1af9106e237e6cf367\" successfully" May 13 02:28:52.342558 containerd[1480]: time="2025-05-13T02:28:52.342325617Z" level=info msg="StopPodSandbox for \"d947fdf12e23fcaf9d5360f33e3eb2d6d2bffabf24b60a1af9106e237e6cf367\" returns successfully" May 13 02:28:52.347177 containerd[1480]: time="2025-05-13T02:28:52.346520123Z" level=info msg="RemovePodSandbox for \"d947fdf12e23fcaf9d5360f33e3eb2d6d2bffabf24b60a1af9106e237e6cf367\"" May 13 02:28:52.347177 containerd[1480]: time="2025-05-13T02:28:52.346615902Z" level=info msg="Forcibly stopping sandbox \"d947fdf12e23fcaf9d5360f33e3eb2d6d2bffabf24b60a1af9106e237e6cf367\"" May 13 02:28:52.347177 containerd[1480]: time="2025-05-13T02:28:52.346880249Z" level=info msg="TearDown network for sandbox \"d947fdf12e23fcaf9d5360f33e3eb2d6d2bffabf24b60a1af9106e237e6cf367\" successfully" May 13 02:28:52.351713 containerd[1480]: time="2025-05-13T02:28:52.351651068Z" level=info msg="Ensure that sandbox d947fdf12e23fcaf9d5360f33e3eb2d6d2bffabf24b60a1af9106e237e6cf367 in task-service has been cleanup successfully" May 13 02:28:52.359622 containerd[1480]: time="2025-05-13T02:28:52.359560198Z" level=info msg="RemovePodSandbox \"d947fdf12e23fcaf9d5360f33e3eb2d6d2bffabf24b60a1af9106e237e6cf367\" returns successfully" May 13 02:28:52.361876 containerd[1480]: time="2025-05-13T02:28:52.361498678Z" level=info 
msg="StopPodSandbox for \"f8c9ab615451e1f44ee30b374ce014a3b78f2ece99b542b34f324ffa3be1b621\"" May 13 02:28:52.361876 containerd[1480]: time="2025-05-13T02:28:52.361778464Z" level=info msg="TearDown network for sandbox \"f8c9ab615451e1f44ee30b374ce014a3b78f2ece99b542b34f324ffa3be1b621\" successfully" May 13 02:28:52.361876 containerd[1480]: time="2025-05-13T02:28:52.361815693Z" level=info msg="StopPodSandbox for \"f8c9ab615451e1f44ee30b374ce014a3b78f2ece99b542b34f324ffa3be1b621\" returns successfully" May 13 02:28:52.364529 containerd[1480]: time="2025-05-13T02:28:52.362943311Z" level=info msg="RemovePodSandbox for \"f8c9ab615451e1f44ee30b374ce014a3b78f2ece99b542b34f324ffa3be1b621\"" May 13 02:28:52.364529 containerd[1480]: time="2025-05-13T02:28:52.363022891Z" level=info msg="Forcibly stopping sandbox \"f8c9ab615451e1f44ee30b374ce014a3b78f2ece99b542b34f324ffa3be1b621\"" May 13 02:28:52.364529 containerd[1480]: time="2025-05-13T02:28:52.363187950Z" level=info msg="TearDown network for sandbox \"f8c9ab615451e1f44ee30b374ce014a3b78f2ece99b542b34f324ffa3be1b621\" successfully" May 13 02:28:52.366319 containerd[1480]: time="2025-05-13T02:28:52.366259287Z" level=info msg="Ensure that sandbox f8c9ab615451e1f44ee30b374ce014a3b78f2ece99b542b34f324ffa3be1b621 in task-service has been cleanup successfully" May 13 02:28:52.381265 containerd[1480]: time="2025-05-13T02:28:52.381192638Z" level=info msg="RemovePodSandbox \"f8c9ab615451e1f44ee30b374ce014a3b78f2ece99b542b34f324ffa3be1b621\" returns successfully" May 13 02:28:54.002559 update_engine[1456]: I20250513 02:28:54.001585 1456 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs May 13 02:28:54.002559 update_engine[1456]: I20250513 02:28:54.001924 1456 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs May 13 02:28:54.003973 update_engine[1456]: I20250513 02:28:54.003121 1456 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs 
May 13 02:28:54.006226 update_engine[1456]: I20250513 02:28:54.006135 1456 omaha_request_params.cc:62] Current group set to alpha May 13 02:28:54.008233 update_engine[1456]: I20250513 02:28:54.006904 1456 update_attempter.cc:499] Already updated boot flags. Skipping. May 13 02:28:54.008233 update_engine[1456]: I20250513 02:28:54.006944 1456 update_attempter.cc:643] Scheduling an action processor start. May 13 02:28:54.008233 update_engine[1456]: I20250513 02:28:54.007021 1456 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction May 13 02:28:54.008233 update_engine[1456]: I20250513 02:28:54.007246 1456 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs May 13 02:28:54.008233 update_engine[1456]: I20250513 02:28:54.007599 1456 omaha_request_action.cc:271] Posting an Omaha request to disabled May 13 02:28:54.008233 update_engine[1456]: I20250513 02:28:54.007634 1456 omaha_request_action.cc:272] Request: May 13 02:28:54.008233 update_engine[1456]: May 13 02:28:54.008233 update_engine[1456]: May 13 02:28:54.008233 update_engine[1456]: May 13 02:28:54.008233 update_engine[1456]: May 13 02:28:54.008233 update_engine[1456]: May 13 02:28:54.008233 update_engine[1456]: May 13 02:28:54.008233 update_engine[1456]: May 13 02:28:54.008233 update_engine[1456]: May 13 02:28:54.008233 update_engine[1456]: I20250513 02:28:54.007673 1456 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 13 02:28:54.015647 update_engine[1456]: I20250513 02:28:54.015551 1456 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 13 02:28:54.015937 locksmithd[1483]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 May 13 02:28:54.017342 update_engine[1456]: I20250513 02:28:54.017202 1456 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
May 13 02:28:54.025269 update_engine[1456]: E20250513 02:28:54.025179 1456 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 13 02:28:54.025514 update_engine[1456]: I20250513 02:28:54.025406 1456 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 May 13 02:29:03.995761 update_engine[1456]: I20250513 02:29:03.995568 1456 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 13 02:29:03.996848 update_engine[1456]: I20250513 02:29:03.996093 1456 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 13 02:29:03.996848 update_engine[1456]: I20250513 02:29:03.996686 1456 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. May 13 02:29:04.002116 update_engine[1456]: E20250513 02:29:04.002020 1456 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 13 02:29:04.002328 update_engine[1456]: I20250513 02:29:04.002173 1456 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 May 13 02:29:14.001785 update_engine[1456]: I20250513 02:29:14.001636 1456 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 13 02:29:14.002840 update_engine[1456]: I20250513 02:29:14.002169 1456 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 13 02:29:14.002840 update_engine[1456]: I20250513 02:29:14.002761 1456 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
May 13 02:29:14.008273 update_engine[1456]: E20250513 02:29:14.008165 1456 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 13 02:29:14.008439 update_engine[1456]: I20250513 02:29:14.008312 1456 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 May 13 02:29:23.998383 update_engine[1456]: I20250513 02:29:23.998131 1456 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 13 02:29:23.999339 update_engine[1456]: I20250513 02:29:23.998831 1456 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 13 02:29:23.999339 update_engine[1456]: I20250513 02:29:23.999301 1456 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. May 13 02:29:24.005003 update_engine[1456]: E20250513 02:29:24.004910 1456 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 13 02:29:24.005244 update_engine[1456]: I20250513 02:29:24.005028 1456 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded May 13 02:29:24.005244 update_engine[1456]: I20250513 02:29:24.005050 1456 omaha_request_action.cc:617] Omaha request response: May 13 02:29:24.005244 update_engine[1456]: E20250513 02:29:24.005198 1456 omaha_request_action.cc:636] Omaha request network transfer failed. May 13 02:29:24.005555 update_engine[1456]: I20250513 02:29:24.005292 1456 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. May 13 02:29:24.005555 update_engine[1456]: I20250513 02:29:24.005313 1456 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction May 13 02:29:24.005555 update_engine[1456]: I20250513 02:29:24.005326 1456 update_attempter.cc:306] Processing Done. May 13 02:29:24.005555 update_engine[1456]: E20250513 02:29:24.005374 1456 update_attempter.cc:619] Update failed. 
May 13 02:29:24.005555 update_engine[1456]: I20250513 02:29:24.005404 1456 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse May 13 02:29:24.005555 update_engine[1456]: I20250513 02:29:24.005419 1456 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) May 13 02:29:24.005555 update_engine[1456]: I20250513 02:29:24.005432 1456 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. May 13 02:29:24.006614 update_engine[1456]: I20250513 02:29:24.005651 1456 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction May 13 02:29:24.006614 update_engine[1456]: I20250513 02:29:24.005708 1456 omaha_request_action.cc:271] Posting an Omaha request to disabled May 13 02:29:24.006614 update_engine[1456]: I20250513 02:29:24.005723 1456 omaha_request_action.cc:272] Request: May 13 02:29:24.006614 update_engine[1456]: May 13 02:29:24.006614 update_engine[1456]: May 13 02:29:24.006614 update_engine[1456]: May 13 02:29:24.006614 update_engine[1456]: May 13 02:29:24.006614 update_engine[1456]: May 13 02:29:24.006614 update_engine[1456]: May 13 02:29:24.006614 update_engine[1456]: I20250513 02:29:24.005738 1456 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 13 02:29:24.006614 update_engine[1456]: I20250513 02:29:24.006139 1456 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 13 02:29:24.007317 update_engine[1456]: I20250513 02:29:24.006633 1456 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
May 13 02:29:24.007981 locksmithd[1483]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 May 13 02:29:24.011871 update_engine[1456]: E20250513 02:29:24.011765 1456 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 13 02:29:24.012042 update_engine[1456]: I20250513 02:29:24.011917 1456 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded May 13 02:29:24.012042 update_engine[1456]: I20250513 02:29:24.011946 1456 omaha_request_action.cc:617] Omaha request response: May 13 02:29:24.012042 update_engine[1456]: I20250513 02:29:24.011961 1456 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction May 13 02:29:24.012042 update_engine[1456]: I20250513 02:29:24.011974 1456 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction May 13 02:29:24.012042 update_engine[1456]: I20250513 02:29:24.011986 1456 update_attempter.cc:306] Processing Done. May 13 02:29:24.012042 update_engine[1456]: I20250513 02:29:24.012001 1456 update_attempter.cc:310] Error event sent. May 13 02:29:24.013281 update_engine[1456]: I20250513 02:29:24.012038 1456 update_check_scheduler.cc:74] Next update check in 42m14s May 13 02:29:24.013500 locksmithd[1483]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0