May 15 01:04:53.066973 kernel: Linux version 6.6.89-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Wed May 14 22:09:34 -00 2025 May 15 01:04:53.067003 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=e0c956f61127e47bb23a2bdeb0592b0ff91bd857e2344d0bf321acb67c279f1a May 15 01:04:53.067013 kernel: BIOS-provided physical RAM map: May 15 01:04:53.067021 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable May 15 01:04:53.067029 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved May 15 01:04:53.067038 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved May 15 01:04:53.067047 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdcfff] usable May 15 01:04:53.067055 kernel: BIOS-e820: [mem 0x00000000bffdd000-0x00000000bfffffff] reserved May 15 01:04:53.067063 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved May 15 01:04:53.067070 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved May 15 01:04:53.067078 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000013fffffff] usable May 15 01:04:53.067086 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved May 15 01:04:53.067093 kernel: NX (Execute Disable) protection: active May 15 01:04:53.067101 kernel: APIC: Static calls initialized May 15 01:04:53.067111 kernel: SMBIOS 3.0.0 present. May 15 01:04:53.067120 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.16.3-debian-1.16.3-2 04/01/2014 May 15 01:04:53.067127 kernel: Hypervisor detected: KVM May 15 01:04:53.067135 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 May 15 01:04:53.067143 kernel: kvm-clock: using sched offset of 3633217994 cycles May 15 01:04:53.067151 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns May 15 01:04:53.067161 kernel: tsc: Detected 1996.249 MHz processor May 15 01:04:53.067170 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved May 15 01:04:53.067178 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable May 15 01:04:53.067187 kernel: last_pfn = 0x140000 max_arch_pfn = 0x400000000 May 15 01:04:53.067195 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs May 15 01:04:53.067204 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT May 15 01:04:53.067212 kernel: last_pfn = 0xbffdd max_arch_pfn = 0x400000000 May 15 01:04:53.067220 kernel: ACPI: Early table checksum verification disabled May 15 01:04:53.067230 kernel: ACPI: RSDP 0x00000000000F51E0 000014 (v00 BOCHS ) May 15 01:04:53.067238 kernel: ACPI: RSDT 0x00000000BFFE1B65 000030 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 15 01:04:53.067246 kernel: ACPI: FACP 0x00000000BFFE1A49 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 15 01:04:53.067254 kernel: ACPI: DSDT 0x00000000BFFE0040 001A09 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 15 01:04:53.067262 kernel: ACPI: FACS 0x00000000BFFE0000 000040 May 15 01:04:53.070225 kernel: ACPI: APIC 0x00000000BFFE1ABD 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) May 15 01:04:53.070238 kernel: ACPI: WAET 0x00000000BFFE1B3D 000028 (v01 
BOCHS BXPC 00000001 BXPC 00000001) May 15 01:04:53.070247 kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1a49-0xbffe1abc] May 15 01:04:53.070256 kernel: ACPI: Reserving DSDT table memory at [mem 0xbffe0040-0xbffe1a48] May 15 01:04:53.070284 kernel: ACPI: Reserving FACS table memory at [mem 0xbffe0000-0xbffe003f] May 15 01:04:53.070293 kernel: ACPI: Reserving APIC table memory at [mem 0xbffe1abd-0xbffe1b3c] May 15 01:04:53.070302 kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1b3d-0xbffe1b64] May 15 01:04:53.070314 kernel: No NUMA configuration found May 15 01:04:53.070322 kernel: Faking a node at [mem 0x0000000000000000-0x000000013fffffff] May 15 01:04:53.070331 kernel: NODE_DATA(0) allocated [mem 0x13fff7000-0x13fffcfff] May 15 01:04:53.070340 kernel: Zone ranges: May 15 01:04:53.070350 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] May 15 01:04:53.070358 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] May 15 01:04:53.070367 kernel: Normal [mem 0x0000000100000000-0x000000013fffffff] May 15 01:04:53.070376 kernel: Movable zone start for each node May 15 01:04:53.070384 kernel: Early memory node ranges May 15 01:04:53.070393 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] May 15 01:04:53.070401 kernel: node 0: [mem 0x0000000000100000-0x00000000bffdcfff] May 15 01:04:53.070410 kernel: node 0: [mem 0x0000000100000000-0x000000013fffffff] May 15 01:04:53.070420 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000013fffffff] May 15 01:04:53.070429 kernel: On node 0, zone DMA: 1 pages in unavailable ranges May 15 01:04:53.070437 kernel: On node 0, zone DMA: 97 pages in unavailable ranges May 15 01:04:53.070446 kernel: On node 0, zone Normal: 35 pages in unavailable ranges May 15 01:04:53.070454 kernel: ACPI: PM-Timer IO Port: 0x608 May 15 01:04:53.070463 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) May 15 01:04:53.070472 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 May 15 01:04:53.070482 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) May 15 01:04:53.070495 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) May 15 01:04:53.070511 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) May 15 01:04:53.070520 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) May 15 01:04:53.070529 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) May 15 01:04:53.070538 kernel: ACPI: Using ACPI (MADT) for SMP configuration information May 15 01:04:53.070546 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs May 15 01:04:53.070555 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() May 15 01:04:53.070563 kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices May 15 01:04:53.070572 kernel: Booting paravirtualized kernel on KVM May 15 01:04:53.070580 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns May 15 01:04:53.070591 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 May 15 01:04:53.070600 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576 May 15 01:04:53.070608 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152 May 15 01:04:53.070617 kernel: pcpu-alloc: [0] 0 1 May 15 01:04:53.070625 kernel: kvm-guest: PV spinlocks disabled, no host support May 15 01:04:53.070636 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr 
verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=e0c956f61127e47bb23a2bdeb0592b0ff91bd857e2344d0bf321acb67c279f1a May 15 01:04:53.070645 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. May 15 01:04:53.070653 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) May 15 01:04:53.070664 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 15 01:04:53.070672 kernel: Fallback order for Node 0: 0 May 15 01:04:53.070681 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1031901 May 15 01:04:53.070689 kernel: Policy zone: Normal May 15 01:04:53.070698 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off May 15 01:04:53.070706 kernel: software IO TLB: area num 2. May 15 01:04:53.070715 kernel: Memory: 3962108K/4193772K available (14336K kernel code, 2296K rwdata, 25068K rodata, 43604K init, 1468K bss, 231404K reserved, 0K cma-reserved) May 15 01:04:53.070724 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 May 15 01:04:53.070732 kernel: ftrace: allocating 37993 entries in 149 pages May 15 01:04:53.070742 kernel: ftrace: allocated 149 pages with 4 groups May 15 01:04:53.070751 kernel: Dynamic Preempt: voluntary May 15 01:04:53.070759 kernel: rcu: Preemptible hierarchical RCU implementation. May 15 01:04:53.070768 kernel: rcu: RCU event tracing is enabled. May 15 01:04:53.070777 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. May 15 01:04:53.070786 kernel: Trampoline variant of Tasks RCU enabled. May 15 01:04:53.070794 kernel: Rude variant of Tasks RCU enabled. May 15 01:04:53.070803 kernel: Tracing variant of Tasks RCU enabled. May 15 01:04:53.070811 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. May 15 01:04:53.070821 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 May 15 01:04:53.070830 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 May 15 01:04:53.070839 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. May 15 01:04:53.070847 kernel: Console: colour VGA+ 80x25 May 15 01:04:53.070856 kernel: printk: console [tty0] enabled May 15 01:04:53.070864 kernel: printk: console [ttyS0] enabled May 15 01:04:53.070873 kernel: ACPI: Core revision 20230628 May 15 01:04:53.070881 kernel: APIC: Switch to symmetric I/O mode setup May 15 01:04:53.070889 kernel: x2apic enabled May 15 01:04:53.070900 kernel: APIC: Switched APIC routing to: physical x2apic May 15 01:04:53.070908 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 May 15 01:04:53.070917 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized May 15 01:04:53.070926 kernel: Calibrating delay loop (skipped) preset value.. 
3992.49 BogoMIPS (lpj=1996249) May 15 01:04:53.070934 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 May 15 01:04:53.070943 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 May 15 01:04:53.070951 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization May 15 01:04:53.070960 kernel: Spectre V2 : Mitigation: Retpolines May 15 01:04:53.070968 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT May 15 01:04:53.070978 kernel: Speculative Store Bypass: Vulnerable May 15 01:04:53.070987 kernel: x86/fpu: x87 FPU will use FXSAVE May 15 01:04:53.070995 kernel: Freeing SMP alternatives memory: 32K May 15 01:04:53.071004 kernel: pid_max: default: 32768 minimum: 301 May 15 01:04:53.071018 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity May 15 01:04:53.071028 kernel: landlock: Up and running. May 15 01:04:53.071037 kernel: SELinux: Initializing. May 15 01:04:53.071046 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 15 01:04:53.071055 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 15 01:04:53.071064 kernel: smpboot: CPU0: AMD Intel Core i7 9xx (Nehalem Class Core i7) (family: 0x6, model: 0x1a, stepping: 0x3) May 15 01:04:53.071073 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. May 15 01:04:53.071082 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. May 15 01:04:53.071093 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. May 15 01:04:53.071102 kernel: Performance Events: AMD PMU driver. May 15 01:04:53.071111 kernel: ... version: 0 May 15 01:04:53.071120 kernel: ... bit width: 48 May 15 01:04:53.071128 kernel: ... generic registers: 4 May 15 01:04:53.071139 kernel: ... value mask: 0000ffffffffffff May 15 01:04:53.071148 kernel: ... max period: 00007fffffffffff May 15 01:04:53.071157 kernel: ... fixed-purpose events: 0 May 15 01:04:53.071166 kernel: ... event mask: 000000000000000f May 15 01:04:53.071174 kernel: signal: max sigframe size: 1440 May 15 01:04:53.071183 kernel: rcu: Hierarchical SRCU implementation. May 15 01:04:53.071192 kernel: rcu: Max phase no-delay instances is 400. May 15 01:04:53.071201 kernel: smp: Bringing up secondary CPUs ... May 15 01:04:53.071210 kernel: smpboot: x86: Booting SMP configuration: May 15 01:04:53.071221 kernel: .... 
node #0, CPUs: #1 May 15 01:04:53.071230 kernel: smp: Brought up 1 node, 2 CPUs May 15 01:04:53.071238 kernel: smpboot: Max logical packages: 2 May 15 01:04:53.071247 kernel: smpboot: Total of 2 processors activated (7984.99 BogoMIPS) May 15 01:04:53.071256 kernel: devtmpfs: initialized May 15 01:04:53.071265 kernel: x86/mm: Memory block size: 128MB May 15 01:04:53.071288 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns May 15 01:04:53.071305 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) May 15 01:04:53.071314 kernel: pinctrl core: initialized pinctrl subsystem May 15 01:04:53.071325 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family May 15 01:04:53.071334 kernel: audit: initializing netlink subsys (disabled) May 15 01:04:53.071343 kernel: audit: type=2000 audit(1747271091.876:1): state=initialized audit_enabled=0 res=1 May 15 01:04:53.071352 kernel: thermal_sys: Registered thermal governor 'step_wise' May 15 01:04:53.071361 kernel: thermal_sys: Registered thermal governor 'user_space' May 15 01:04:53.071370 kernel: cpuidle: using governor menu May 15 01:04:53.071379 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 May 15 01:04:53.071388 kernel: dca service started, version 1.12.1 May 15 01:04:53.071397 kernel: PCI: Using configuration type 1 for base access May 15 01:04:53.071407 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. May 15 01:04:53.071416 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages May 15 01:04:53.071425 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page May 15 01:04:53.071434 kernel: ACPI: Added _OSI(Module Device) May 15 01:04:53.071443 kernel: ACPI: Added _OSI(Processor Device) May 15 01:04:53.071452 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) May 15 01:04:53.071461 kernel: ACPI: Added _OSI(Processor Aggregator Device) May 15 01:04:53.071470 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded May 15 01:04:53.071478 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC May 15 01:04:53.071489 kernel: ACPI: Interpreter enabled May 15 01:04:53.071497 kernel: ACPI: PM: (supports S0 S3 S5) May 15 01:04:53.071506 kernel: ACPI: Using IOAPIC for interrupt routing May 15 01:04:53.071515 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug May 15 01:04:53.071524 kernel: PCI: Using E820 reservations for host bridge windows May 15 01:04:53.071533 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F May 15 01:04:53.071542 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) May 15 01:04:53.071689 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] May 15 01:04:53.071792 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] May 15 01:04:53.071887 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge May 15 01:04:53.071902 kernel: acpiphp: Slot [3] registered May 15 01:04:53.071912 kernel: acpiphp: Slot [4] registered May 15 01:04:53.071921 kernel: acpiphp: Slot [5] registered May 15 01:04:53.071931 kernel: acpiphp: Slot [6] registered May 15 01:04:53.071941 kernel: acpiphp: Slot [7] registered May 15 01:04:53.071950 kernel: acpiphp: Slot [8] registered May 15 01:04:53.071959 kernel: acpiphp: Slot [9] registered May 15 01:04:53.071972 kernel: acpiphp: Slot [10] registered May 15 01:04:53.071981 
kernel: acpiphp: Slot [11] registered May 15 01:04:53.071991 kernel: acpiphp: Slot [12] registered May 15 01:04:53.072000 kernel: acpiphp: Slot [13] registered May 15 01:04:53.072009 kernel: acpiphp: Slot [14] registered May 15 01:04:53.072019 kernel: acpiphp: Slot [15] registered May 15 01:04:53.072028 kernel: acpiphp: Slot [16] registered May 15 01:04:53.072038 kernel: acpiphp: Slot [17] registered May 15 01:04:53.072047 kernel: acpiphp: Slot [18] registered May 15 01:04:53.072058 kernel: acpiphp: Slot [19] registered May 15 01:04:53.072067 kernel: acpiphp: Slot [20] registered May 15 01:04:53.072077 kernel: acpiphp: Slot [21] registered May 15 01:04:53.072086 kernel: acpiphp: Slot [22] registered May 15 01:04:53.072096 kernel: acpiphp: Slot [23] registered May 15 01:04:53.072105 kernel: acpiphp: Slot [24] registered May 15 01:04:53.072114 kernel: acpiphp: Slot [25] registered May 15 01:04:53.072124 kernel: acpiphp: Slot [26] registered May 15 01:04:53.072160 kernel: acpiphp: Slot [27] registered May 15 01:04:53.072171 kernel: acpiphp: Slot [28] registered May 15 01:04:53.072184 kernel: acpiphp: Slot [29] registered May 15 01:04:53.072195 kernel: acpiphp: Slot [30] registered May 15 01:04:53.072206 kernel: acpiphp: Slot [31] registered May 15 01:04:53.072217 kernel: PCI host bridge to bus 0000:00 May 15 01:04:53.072358 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] May 15 01:04:53.072464 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] May 15 01:04:53.072566 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] May 15 01:04:53.072689 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] May 15 01:04:53.072790 kernel: pci_bus 0000:00: root bus resource [mem 0xc000000000-0xc07fffffff window] May 15 01:04:53.072889 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] May 15 01:04:53.073025 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 May 15 01:04:53.073152 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 May 15 01:04:53.073298 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 May 15 01:04:53.073420 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc120-0xc12f] May 15 01:04:53.073540 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] May 15 01:04:53.073651 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6] May 15 01:04:53.073756 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] May 15 01:04:53.073859 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376] May 15 01:04:53.073975 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 May 15 01:04:53.074082 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI May 15 01:04:53.074193 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB May 15 01:04:53.076198 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 May 15 01:04:53.076965 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref] May 15 01:04:53.077074 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xc000000000-0xc000003fff 64bit pref] May 15 01:04:53.077179 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfeb90000-0xfeb90fff] May 15 01:04:53.084255 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfeb80000-0xfeb8ffff pref] May 15 01:04:53.084398 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] May 15 01:04:53.084531 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 May 15 01:04:53.084639 kernel: pci 
0000:00:03.0: reg 0x10: [io 0xc080-0xc0bf] May 15 01:04:53.084745 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfeb91000-0xfeb91fff] May 15 01:04:53.084847 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xc000004000-0xc000007fff 64bit pref] May 15 01:04:53.084951 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb00000-0xfeb7ffff pref] May 15 01:04:53.085062 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 May 15 01:04:53.085168 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f] May 15 01:04:53.085313 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfeb92000-0xfeb92fff] May 15 01:04:53.085422 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xc000008000-0xc00000bfff 64bit pref] May 15 01:04:53.085535 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 May 15 01:04:53.085640 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc0c0-0xc0ff] May 15 01:04:53.085742 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xc00000c000-0xc00000ffff 64bit pref] May 15 01:04:53.085854 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 May 15 01:04:53.085971 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc100-0xc11f] May 15 01:04:53.086082 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfeb93000-0xfeb93fff] May 15 01:04:53.086186 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xc000010000-0xc000013fff 64bit pref] May 15 01:04:53.086201 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 May 15 01:04:53.086212 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 May 15 01:04:53.086222 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 May 15 01:04:53.086232 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 May 15 01:04:53.086242 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 May 15 01:04:53.086253 kernel: iommu: Default domain type: Translated May 15 01:04:53.087302 kernel: iommu: DMA domain TLB invalidation policy: lazy mode May 15 01:04:53.087321 kernel: PCI: Using ACPI for IRQ routing May 15 01:04:53.087332 kernel: PCI: pci_cache_line_size set to 64 bytes May 15 01:04:53.087344 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] May 15 01:04:53.087355 kernel: e820: reserve RAM buffer [mem 0xbffdd000-0xbfffffff] May 15 01:04:53.087468 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device May 15 01:04:53.087573 kernel: pci 0000:00:02.0: vgaarb: bridge control possible May 15 01:04:53.087676 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none May 15 01:04:53.087693 kernel: vgaarb: loaded May 15 01:04:53.087708 kernel: clocksource: Switched to clocksource kvm-clock May 15 01:04:53.087719 kernel: VFS: Disk quotas dquot_6.6.0 May 15 01:04:53.087730 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) May 15 01:04:53.087741 kernel: pnp: PnP ACPI init May 15 01:04:53.087850 kernel: pnp 00:03: [dma 2] May 15 01:04:53.087868 kernel: pnp: PnP ACPI: found 5 devices May 15 01:04:53.087879 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns May 15 01:04:53.087890 kernel: NET: Registered PF_INET protocol family May 15 01:04:53.087904 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) May 15 01:04:53.087915 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) May 15 01:04:53.087926 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) May 15 01:04:53.087937 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) May 15 01:04:53.087948 kernel: TCP bind hash table entries: 
32768 (order: 8, 1048576 bytes, linear) May 15 01:04:53.087959 kernel: TCP: Hash tables configured (established 32768 bind 32768) May 15 01:04:53.087970 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) May 15 01:04:53.087981 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) May 15 01:04:53.087991 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family May 15 01:04:53.088004 kernel: NET: Registered PF_XDP protocol family May 15 01:04:53.088101 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] May 15 01:04:53.088216 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] May 15 01:04:53.089356 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] May 15 01:04:53.089455 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window] May 15 01:04:53.089550 kernel: pci_bus 0000:00: resource 8 [mem 0xc000000000-0xc07fffffff window] May 15 01:04:53.089661 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release May 15 01:04:53.089770 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers May 15 01:04:53.089791 kernel: PCI: CLS 0 bytes, default 64 May 15 01:04:53.089802 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) May 15 01:04:53.089812 kernel: software IO TLB: mapped [mem 0x00000000bbfdd000-0x00000000bffdd000] (64MB) May 15 01:04:53.089824 kernel: Initialise system trusted keyrings May 15 01:04:53.089835 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 May 15 01:04:53.089845 kernel: Key type asymmetric registered May 15 01:04:53.089855 kernel: Asymmetric key parser 'x509' registered May 15 01:04:53.089866 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) May 15 01:04:53.089876 kernel: io scheduler mq-deadline registered May 15 01:04:53.089889 kernel: io scheduler kyber registered May 15 01:04:53.089900 kernel: io scheduler bfq registered May 15 01:04:53.089910 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 May 15 01:04:53.089921 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10 May 15 01:04:53.089932 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 May 15 01:04:53.089942 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 May 15 01:04:53.089952 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 May 15 01:04:53.089964 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled May 15 01:04:53.089974 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A May 15 01:04:53.089986 kernel: random: crng init done May 15 01:04:53.089997 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 May 15 01:04:53.090007 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 May 15 01:04:53.090017 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 May 15 01:04:53.090135 kernel: rtc_cmos 00:04: RTC can wake from S4 May 15 01:04:53.090154 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 May 15 01:04:53.090248 kernel: rtc_cmos 00:04: registered as rtc0 May 15 01:04:53.092446 kernel: rtc_cmos 00:04: setting system clock to 2025-05-15T01:04:52 UTC (1747271092) May 15 01:04:53.092555 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram May 15 01:04:53.092571 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled May 15 01:04:53.092582 kernel: NET: Registered PF_INET6 protocol family May 15 01:04:53.092592 kernel: Segment Routing with IPv6 May 15 01:04:53.092603 kernel: In-situ OAM (IOAM) with IPv6 May 15 01:04:53.092613 kernel: NET: Registered PF_PACKET 
protocol family May 15 01:04:53.092623 kernel: Key type dns_resolver registered May 15 01:04:53.092633 kernel: IPI shorthand broadcast: enabled May 15 01:04:53.092643 kernel: sched_clock: Marking stable (982006131, 177431326)->(1192539027, -33101570) May 15 01:04:53.092658 kernel: registered taskstats version 1 May 15 01:04:53.092668 kernel: Loading compiled-in X.509 certificates May 15 01:04:53.092678 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.89-flatcar: 4f9bc5b8797c7efeb1fcd74892dea83a6cb9d390' May 15 01:04:53.092688 kernel: Key type .fscrypt registered May 15 01:04:53.092698 kernel: Key type fscrypt-provisioning registered May 15 01:04:53.092709 kernel: ima: No TPM chip found, activating TPM-bypass! May 15 01:04:53.092719 kernel: ima: Allocated hash algorithm: sha1 May 15 01:04:53.092729 kernel: ima: No architecture policies found May 15 01:04:53.092741 kernel: clk: Disabling unused clocks May 15 01:04:53.092751 kernel: Freeing unused kernel image (initmem) memory: 43604K May 15 01:04:53.092761 kernel: Write protecting the kernel read-only data: 40960k May 15 01:04:53.092772 kernel: Freeing unused kernel image (rodata/data gap) memory: 1556K May 15 01:04:53.092782 kernel: Run /init as init process May 15 01:04:53.092792 kernel: with arguments: May 15 01:04:53.092802 kernel: /init May 15 01:04:53.092812 kernel: with environment: May 15 01:04:53.092822 kernel: HOME=/ May 15 01:04:53.092832 kernel: TERM=linux May 15 01:04:53.092844 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 15 01:04:53.092856 systemd[1]: Successfully made /usr/ read-only. May 15 01:04:53.092870 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 15 01:04:53.092882 systemd[1]: Detected virtualization kvm. May 15 01:04:53.092893 systemd[1]: Detected architecture x86-64. May 15 01:04:53.092903 systemd[1]: Running in initrd. May 15 01:04:53.092914 systemd[1]: No hostname configured, using default hostname. May 15 01:04:53.092927 systemd[1]: Hostname set to . May 15 01:04:53.092938 systemd[1]: Initializing machine ID from VM UUID. May 15 01:04:53.092948 systemd[1]: Queued start job for default target initrd.target. May 15 01:04:53.092959 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 15 01:04:53.092970 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 15 01:04:53.092982 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... May 15 01:04:53.093002 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 15 01:04:53.093016 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... May 15 01:04:53.093028 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... May 15 01:04:53.093040 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... May 15 01:04:53.093052 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... 
May 15 01:04:53.093063 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 15 01:04:53.093076 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 15 01:04:53.093088 systemd[1]: Reached target paths.target - Path Units. May 15 01:04:53.093099 systemd[1]: Reached target slices.target - Slice Units. May 15 01:04:53.093110 systemd[1]: Reached target swap.target - Swaps. May 15 01:04:53.093121 systemd[1]: Reached target timers.target - Timer Units. May 15 01:04:53.093133 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. May 15 01:04:53.093144 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 15 01:04:53.093155 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). May 15 01:04:53.093167 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. May 15 01:04:53.093180 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 15 01:04:53.093191 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 15 01:04:53.093202 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 15 01:04:53.093214 systemd[1]: Reached target sockets.target - Socket Units. May 15 01:04:53.093225 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... May 15 01:04:53.093236 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 15 01:04:53.093247 systemd[1]: Finished network-cleanup.service - Network Cleanup. May 15 01:04:53.093259 systemd[1]: Starting systemd-fsck-usr.service... May 15 01:04:53.093294 systemd[1]: Starting systemd-journald.service - Journal Service... May 15 01:04:53.093311 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 15 01:04:53.093322 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 15 01:04:53.093334 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. May 15 01:04:53.093345 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 15 01:04:53.093357 systemd[1]: Finished systemd-fsck-usr.service. May 15 01:04:53.093397 systemd-journald[184]: Collecting audit messages is disabled. May 15 01:04:53.093427 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 15 01:04:53.093442 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 15 01:04:53.093455 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 15 01:04:53.093469 systemd-journald[184]: Journal started May 15 01:04:53.093499 systemd-journald[184]: Runtime Journal (/run/log/journal/ba9d54c60db64ed28f9e30457ddd1379) is 8M, max 78.2M, 70.2M free. May 15 01:04:53.097355 systemd-modules-load[186]: Inserted module 'overlay' May 15 01:04:53.141231 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 15 01:04:53.141255 kernel: Bridge firewalling registered May 15 01:04:53.128728 systemd-modules-load[186]: Inserted module 'br_netfilter' May 15 01:04:53.149400 systemd[1]: Started systemd-journald.service - Journal Service. May 15 01:04:53.150249 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 15 01:04:53.150913 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
May 15 01:04:53.154383 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 15 01:04:53.158451 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 15 01:04:53.160087 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 15 01:04:53.167545 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 15 01:04:53.177156 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 15 01:04:53.179689 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 15 01:04:53.184398 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 15 01:04:53.185976 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 15 01:04:53.193449 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... May 15 01:04:53.213394 dracut-cmdline[223]: dracut-dracut-053 May 15 01:04:53.217077 dracut-cmdline[223]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=e0c956f61127e47bb23a2bdeb0592b0ff91bd857e2344d0bf321acb67c279f1a May 15 01:04:53.227087 systemd-resolved[217]: Positive Trust Anchors: May 15 01:04:53.227108 systemd-resolved[217]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 15 01:04:53.227151 systemd-resolved[217]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 15 01:04:53.231391 systemd-resolved[217]: Defaulting to hostname 'linux'. May 15 01:04:53.232881 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 15 01:04:53.233553 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 15 01:04:53.305369 kernel: SCSI subsystem initialized May 15 01:04:53.316335 kernel: Loading iSCSI transport class v2.0-870. May 15 01:04:53.328728 kernel: iscsi: registered transport (tcp) May 15 01:04:53.351598 kernel: iscsi: registered transport (qla4xxx) May 15 01:04:53.351668 kernel: QLogic iSCSI HBA Driver May 15 01:04:53.410423 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. May 15 01:04:53.413398 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... May 15 01:04:53.480992 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
May 15 01:04:53.481098 kernel: device-mapper: uevent: version 1.0.3 May 15 01:04:53.486300 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com May 15 01:04:53.547370 kernel: raid6: sse2x4 gen() 5211 MB/s May 15 01:04:53.566343 kernel: raid6: sse2x2 gen() 5983 MB/s May 15 01:04:53.584691 kernel: raid6: sse2x1 gen() 8919 MB/s May 15 01:04:53.584757 kernel: raid6: using algorithm sse2x1 gen() 8919 MB/s May 15 01:04:53.603719 kernel: raid6: .... xor() 7401 MB/s, rmw enabled May 15 01:04:53.603783 kernel: raid6: using ssse3x2 recovery algorithm May 15 01:04:53.626724 kernel: xor: measuring software checksum speed May 15 01:04:53.626795 kernel: prefetch64-sse : 17065 MB/sec May 15 01:04:53.628014 kernel: generic_sse : 16234 MB/sec May 15 01:04:53.628054 kernel: xor: using function: prefetch64-sse (17065 MB/sec) May 15 01:04:53.801454 kernel: Btrfs loaded, zoned=no, fsverity=no May 15 01:04:53.817071 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. May 15 01:04:53.822483 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 15 01:04:53.871430 systemd-udevd[405]: Using default interface naming scheme 'v255'. May 15 01:04:53.882548 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 15 01:04:53.889502 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... May 15 01:04:53.927592 dracut-pre-trigger[414]: rd.md=0: removing MD RAID activation May 15 01:04:53.975237 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. May 15 01:04:53.980212 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 15 01:04:54.037711 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 15 01:04:54.048162 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... May 15 01:04:54.100429 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. May 15 01:04:54.108993 kernel: virtio_blk virtio2: 2/0/0 default/read/poll queues May 15 01:04:54.111929 kernel: virtio_blk virtio2: [vda] 20971520 512-byte logical blocks (10.7 GB/10.0 GiB) May 15 01:04:54.108238 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. May 15 01:04:54.111376 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 15 01:04:54.111888 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 15 01:04:54.113772 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... May 15 01:04:54.137298 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. May 15 01:04:54.137366 kernel: GPT:17805311 != 20971519 May 15 01:04:54.137382 kernel: GPT:Alternate GPT header not at the end of the disk. May 15 01:04:54.139159 kernel: GPT:17805311 != 20971519 May 15 01:04:54.139202 kernel: GPT: Use GNU Parted to correct GPT errors. May 15 01:04:54.140530 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 15 01:04:54.142352 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. May 15 01:04:54.163305 kernel: libata version 3.00 loaded. May 15 01:04:54.170369 kernel: ata_piix 0000:00:01.1: version 2.13 May 15 01:04:54.172820 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
May 15 01:04:54.182197 kernel: scsi host0: ata_piix May 15 01:04:54.182391 kernel: scsi host1: ata_piix May 15 01:04:54.182519 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc120 irq 14 May 15 01:04:54.182533 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc128 irq 15 May 15 01:04:54.172893 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 15 01:04:54.181944 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 15 01:04:54.183050 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 15 01:04:54.183111 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 15 01:04:54.185344 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... May 15 01:04:54.187374 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 15 01:04:54.192843 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. May 15 01:04:54.209300 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (466) May 15 01:04:54.220312 kernel: BTRFS: device fsid 267fa270-7a71-43aa-9209-0280512688b5 devid 1 transid 41 /dev/vda3 scanned by (udev-worker) (462) May 15 01:04:54.252254 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. May 15 01:04:54.272445 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 15 01:04:54.285141 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. May 15 01:04:54.296954 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 15 01:04:54.305728 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. May 15 01:04:54.306360 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. May 15 01:04:54.309408 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... May 15 01:04:54.311372 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 15 01:04:54.330826 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 15 01:04:54.332218 disk-uuid[507]: Primary Header is updated. May 15 01:04:54.332218 disk-uuid[507]: Secondary Entries is updated. May 15 01:04:54.332218 disk-uuid[507]: Secondary Header is updated. May 15 01:04:54.342742 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 15 01:04:54.351294 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 15 01:04:55.365400 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 15 01:04:55.366410 disk-uuid[516]: The operation has completed successfully. May 15 01:04:55.457242 systemd[1]: disk-uuid.service: Deactivated successfully. May 15 01:04:55.457373 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. May 15 01:04:55.495836 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... May 15 01:04:55.515874 sh[527]: Success May 15 01:04:55.537309 kernel: device-mapper: verity: sha256 using implementation "sha256-ssse3" May 15 01:04:55.641882 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. May 15 01:04:55.653452 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... 
May 15 01:04:55.659237 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. May 15 01:04:55.699250 kernel: BTRFS info (device dm-0): first mount of filesystem 267fa270-7a71-43aa-9209-0280512688b5 May 15 01:04:55.699372 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm May 15 01:04:55.703950 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead May 15 01:04:55.708914 kernel: BTRFS info (device dm-0): disabling log replay at mount time May 15 01:04:55.712790 kernel: BTRFS info (device dm-0): using free space tree May 15 01:04:55.748822 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. May 15 01:04:55.751081 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. May 15 01:04:55.754190 systemd[1]: Starting ignition-setup.service - Ignition (setup)... May 15 01:04:55.758547 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... May 15 01:04:55.804349 kernel: BTRFS info (device vda6): first mount of filesystem 4c949817-d4f4-485b-8019-80887ee5206f May 15 01:04:55.810696 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 15 01:04:55.810761 kernel: BTRFS info (device vda6): using free space tree May 15 01:04:55.825335 kernel: BTRFS info (device vda6): auto enabling async discard May 15 01:04:55.834313 kernel: BTRFS info (device vda6): last unmount of filesystem 4c949817-d4f4-485b-8019-80887ee5206f May 15 01:04:55.853738 systemd[1]: Finished ignition-setup.service - Ignition (setup). May 15 01:04:55.858532 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... May 15 01:04:55.924912 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 15 01:04:55.933923 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 15 01:04:55.978229 systemd-networkd[707]: lo: Link UP May 15 01:04:55.978244 systemd-networkd[707]: lo: Gained carrier May 15 01:04:55.981831 systemd-networkd[707]: Enumeration completed May 15 01:04:55.981942 systemd[1]: Started systemd-networkd.service - Network Configuration. May 15 01:04:55.982748 systemd[1]: Reached target network.target - Network. May 15 01:04:55.983460 systemd-networkd[707]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 15 01:04:55.983465 systemd-networkd[707]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 15 01:04:55.984584 systemd-networkd[707]: eth0: Link UP May 15 01:04:55.984590 systemd-networkd[707]: eth0: Gained carrier May 15 01:04:55.984603 systemd-networkd[707]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 15 01:04:55.998004 systemd-networkd[707]: eth0: DHCPv4 address 172.24.4.204/24, gateway 172.24.4.1 acquired from 172.24.4.1 May 15 01:04:56.036949 ignition[633]: Ignition 2.20.0 May 15 01:04:56.036966 ignition[633]: Stage: fetch-offline May 15 01:04:56.039516 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). May 15 01:04:56.037021 ignition[633]: no configs at "/usr/lib/ignition/base.d" May 15 01:04:56.037033 ignition[633]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" May 15 01:04:56.041389 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
May 15 01:04:56.037220 ignition[633]: parsed url from cmdline: "" May 15 01:04:56.037224 ignition[633]: no config URL provided May 15 01:04:56.037231 ignition[633]: reading system config file "/usr/lib/ignition/user.ign" May 15 01:04:56.037241 ignition[633]: no config at "/usr/lib/ignition/user.ign" May 15 01:04:56.037248 ignition[633]: failed to fetch config: resource requires networking May 15 01:04:56.037493 ignition[633]: Ignition finished successfully May 15 01:04:56.063940 ignition[717]: Ignition 2.20.0 May 15 01:04:56.064807 ignition[717]: Stage: fetch May 15 01:04:56.064985 ignition[717]: no configs at "/usr/lib/ignition/base.d" May 15 01:04:56.064997 ignition[717]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" May 15 01:04:56.065119 ignition[717]: parsed url from cmdline: "" May 15 01:04:56.065123 ignition[717]: no config URL provided May 15 01:04:56.065129 ignition[717]: reading system config file "/usr/lib/ignition/user.ign" May 15 01:04:56.065138 ignition[717]: no config at "/usr/lib/ignition/user.ign" May 15 01:04:56.065229 ignition[717]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 May 15 01:04:56.065250 ignition[717]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... May 15 01:04:56.065318 ignition[717]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... May 15 01:04:56.301812 ignition[717]: GET result: OK May 15 01:04:56.301988 ignition[717]: parsing config with SHA512: fc18aa8c8a07089c45b866f62c4041b8253c24a22f092d9a1887400b23dc6528c8c925f32004966affa20f129a09416e8658180c97ffb6cf08ac5d130f01a575 May 15 01:04:56.317730 unknown[717]: fetched base config from "system" May 15 01:04:56.317786 unknown[717]: fetched base config from "system" May 15 01:04:56.319253 ignition[717]: fetch: fetch complete May 15 01:04:56.317804 unknown[717]: fetched user config from "openstack" May 15 01:04:56.319267 ignition[717]: fetch: fetch passed May 15 01:04:56.322828 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). May 15 01:04:56.319448 ignition[717]: Ignition finished successfully May 15 01:04:56.327557 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... May 15 01:04:56.372572 ignition[724]: Ignition 2.20.0 May 15 01:04:56.372600 ignition[724]: Stage: kargs May 15 01:04:56.373014 ignition[724]: no configs at "/usr/lib/ignition/base.d" May 15 01:04:56.373042 ignition[724]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" May 15 01:04:56.375591 ignition[724]: kargs: kargs passed May 15 01:04:56.375694 ignition[724]: Ignition finished successfully May 15 01:04:56.380335 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). May 15 01:04:56.386110 systemd[1]: Starting ignition-disks.service - Ignition (disks)... May 15 01:04:56.429983 ignition[730]: Ignition 2.20.0 May 15 01:04:56.431681 ignition[730]: Stage: disks May 15 01:04:56.432095 ignition[730]: no configs at "/usr/lib/ignition/base.d" May 15 01:04:56.432148 ignition[730]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" May 15 01:04:56.438735 ignition[730]: disks: disks passed May 15 01:04:56.438840 ignition[730]: Ignition finished successfully May 15 01:04:56.440776 systemd[1]: Finished ignition-disks.service - Ignition (disks). May 15 01:04:56.443885 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. May 15 01:04:56.445927 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. 
May 15 01:04:56.448944 systemd[1]: Reached target local-fs.target - Local File Systems. May 15 01:04:56.451875 systemd[1]: Reached target sysinit.target - System Initialization. May 15 01:04:56.454506 systemd[1]: Reached target basic.target - Basic System. May 15 01:04:56.459242 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... May 15 01:04:56.509093 systemd-fsck[739]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks May 15 01:04:56.529474 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. May 15 01:04:56.533958 systemd[1]: Mounting sysroot.mount - /sysroot... May 15 01:04:56.709338 kernel: EXT4-fs (vda9): mounted filesystem 81735587-bac5-4d9e-ae49-5642e655af7f r/w with ordered data mode. Quota mode: none. May 15 01:04:56.711516 systemd[1]: Mounted sysroot.mount - /sysroot. May 15 01:04:56.713949 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. May 15 01:04:56.717757 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 15 01:04:56.721355 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... May 15 01:04:56.722763 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. May 15 01:04:56.725424 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent... May 15 01:04:56.727388 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 15 01:04:56.727432 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. May 15 01:04:56.737612 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. May 15 01:04:56.741398 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... May 15 01:04:56.752324 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (747) May 15 01:04:56.771304 kernel: BTRFS info (device vda6): first mount of filesystem 4c949817-d4f4-485b-8019-80887ee5206f May 15 01:04:56.793066 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 15 01:04:56.793134 kernel: BTRFS info (device vda6): using free space tree May 15 01:04:56.812356 kernel: BTRFS info (device vda6): auto enabling async discard May 15 01:04:56.825937 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 15 01:04:56.894360 initrd-setup-root[774]: cut: /sysroot/etc/passwd: No such file or directory May 15 01:04:56.903850 initrd-setup-root[782]: cut: /sysroot/etc/group: No such file or directory May 15 01:04:56.911347 initrd-setup-root[789]: cut: /sysroot/etc/shadow: No such file or directory May 15 01:04:56.922552 initrd-setup-root[796]: cut: /sysroot/etc/gshadow: No such file or directory May 15 01:04:57.040343 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. May 15 01:04:57.042882 systemd[1]: Starting ignition-mount.service - Ignition (mount)... May 15 01:04:57.047496 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... May 15 01:04:57.060410 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
May 15 01:04:57.065291 kernel: BTRFS info (device vda6): last unmount of filesystem 4c949817-d4f4-485b-8019-80887ee5206f May 15 01:04:57.091722 ignition[863]: INFO : Ignition 2.20.0 May 15 01:04:57.093546 ignition[863]: INFO : Stage: mount May 15 01:04:57.093546 ignition[863]: INFO : no configs at "/usr/lib/ignition/base.d" May 15 01:04:57.093546 ignition[863]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" May 15 01:04:57.097057 ignition[863]: INFO : mount: mount passed May 15 01:04:57.097057 ignition[863]: INFO : Ignition finished successfully May 15 01:04:57.097863 systemd[1]: Finished ignition-mount.service - Ignition (mount). May 15 01:04:57.101400 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. May 15 01:04:57.589511 systemd-networkd[707]: eth0: Gained IPv6LL May 15 01:05:03.967756 coreos-metadata[749]: May 15 01:05:03.967 WARN failed to locate config-drive, using the metadata service API instead May 15 01:05:04.011185 coreos-metadata[749]: May 15 01:05:04.011 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 May 15 01:05:04.023633 coreos-metadata[749]: May 15 01:05:04.023 INFO Fetch successful May 15 01:05:04.025084 coreos-metadata[749]: May 15 01:05:04.024 INFO wrote hostname ci-4284-0-0-n-df1b790171.novalocal to /sysroot/etc/hostname May 15 01:05:04.027699 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. May 15 01:05:04.027936 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent. May 15 01:05:04.036650 systemd[1]: Starting ignition-files.service - Ignition (files)... May 15 01:05:04.067544 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 15 01:05:04.101371 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (881) May 15 01:05:04.110334 kernel: BTRFS info (device vda6): first mount of filesystem 4c949817-d4f4-485b-8019-80887ee5206f May 15 01:05:04.110443 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 15 01:05:04.112955 kernel: BTRFS info (device vda6): using free space tree May 15 01:05:04.124327 kernel: BTRFS info (device vda6): auto enabling async discard May 15 01:05:04.130677 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
May 15 01:05:04.182458 ignition[899]: INFO : Ignition 2.20.0 May 15 01:05:04.182458 ignition[899]: INFO : Stage: files May 15 01:05:04.185340 ignition[899]: INFO : no configs at "/usr/lib/ignition/base.d" May 15 01:05:04.185340 ignition[899]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" May 15 01:05:04.185340 ignition[899]: DEBUG : files: compiled without relabeling support, skipping May 15 01:05:04.190959 ignition[899]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 15 01:05:04.190959 ignition[899]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 15 01:05:04.194704 ignition[899]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 15 01:05:04.196749 ignition[899]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 15 01:05:04.198627 ignition[899]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 15 01:05:04.198562 unknown[899]: wrote ssh authorized keys file for user: core May 15 01:05:04.202451 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" May 15 01:05:04.205315 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 May 15 01:05:04.274925 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK May 15 01:05:04.597764 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" May 15 01:05:04.597764 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" May 15 01:05:04.597764 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 May 15 01:05:05.331458 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK May 15 01:05:05.894711 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" May 15 01:05:05.894711 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" May 15 01:05:05.899147 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" May 15 01:05:05.899147 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" May 15 01:05:05.899147 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" May 15 01:05:05.899147 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 15 01:05:05.899147 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 15 01:05:05.899147 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 15 01:05:05.899147 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 15 01:05:05.899147 
ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" May 15 01:05:05.899147 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 15 01:05:05.899147 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" May 15 01:05:05.899147 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" May 15 01:05:05.899147 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" May 15 01:05:05.899147 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-x86-64.raw: attempt #1 May 15 01:05:06.355835 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK May 15 01:05:07.972121 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" May 15 01:05:07.972121 ignition[899]: INFO : files: op(c): [started] processing unit "prepare-helm.service" May 15 01:05:07.977323 ignition[899]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 15 01:05:07.977323 ignition[899]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 15 01:05:07.977323 ignition[899]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" May 15 01:05:07.977323 ignition[899]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" May 15 01:05:07.977323 ignition[899]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" May 15 01:05:07.977323 ignition[899]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" May 15 01:05:07.977323 ignition[899]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" May 15 01:05:07.977323 ignition[899]: INFO : files: files passed May 15 01:05:07.977323 ignition[899]: INFO : Ignition finished successfully May 15 01:05:07.976661 systemd[1]: Finished ignition-files.service - Ignition (files). May 15 01:05:07.982403 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... May 15 01:05:07.985007 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... May 15 01:05:08.006854 initrd-setup-root-after-ignition[927]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 15 01:05:08.006854 initrd-setup-root-after-ignition[927]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory May 15 01:05:08.004849 systemd[1]: ignition-quench.service: Deactivated successfully. 
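Everything the files stage just logged, downloading the helm and cilium archives, dropping the kubernetes sysext image plus its /etc/extensions link, and installing and enabling prepare-helm.service, is driven by the Ignition config the instance was booted with. A hypothetical fragment in the 3.x config spec that would produce operations like op(3), op(a)/op(b) and op(c)-op(e) above; the URLs, paths and unit name are taken from the log, while the spec version and the unit body are assumptions:

    {
      "ignition": { "version": "3.4.0" },
      "storage": {
        "files": [
          {
            "path": "/opt/helm-v3.17.0-linux-amd64.tar.gz",
            "contents": { "source": "https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz" }
          },
          {
            "path": "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw",
            "contents": { "source": "https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-x86-64.raw" }
          }
        ],
        "links": [
          {
            "path": "/etc/extensions/kubernetes.raw",
            "target": "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw",
            "hard": false
          }
        ]
      },
      "systemd": {
        "units": [
          {
            "name": "prepare-helm.service",
            "enabled": true,
            "contents": "[Unit]\nDescription=Unpack helm to /opt/bin\n[Service]\nType=oneshot\nExecStart=/usr/bin/tar -C /opt/bin -xzf /opt/helm-v3.17.0-linux-amd64.tar.gz --strip-components=1 linux-amd64/helm\n[Install]\nWantedBy=multi-user.target"
          }
        ]
      }
    }

Ignition logs the destinations under /sysroot because it runs from the initramfs; the config itself addresses the final root, so it omits that prefix.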
May 15 01:05:08.012689 initrd-setup-root-after-ignition[932]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 15 01:05:08.004951 systemd[1]: Finished ignition-quench.service - Ignition (record completion). May 15 01:05:08.009640 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. May 15 01:05:08.010633 systemd[1]: Reached target ignition-complete.target - Ignition Complete. May 15 01:05:08.013412 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... May 15 01:05:08.061979 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 15 01:05:08.062202 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. May 15 01:05:08.064408 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. May 15 01:05:08.065948 systemd[1]: Reached target initrd.target - Initrd Default Target. May 15 01:05:08.067913 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. May 15 01:05:08.069826 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... May 15 01:05:08.098515 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 15 01:05:08.102576 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... May 15 01:05:08.133935 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. May 15 01:05:08.135743 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. May 15 01:05:08.138706 systemd[1]: Stopped target timers.target - Timer Units. May 15 01:05:08.141563 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 15 01:05:08.141855 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 15 01:05:08.144877 systemd[1]: Stopped target initrd.target - Initrd Default Target. May 15 01:05:08.146690 systemd[1]: Stopped target basic.target - Basic System. May 15 01:05:08.149556 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. May 15 01:05:08.152127 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. May 15 01:05:08.154658 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. May 15 01:05:08.157736 systemd[1]: Stopped target remote-fs.target - Remote File Systems. May 15 01:05:08.160693 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. May 15 01:05:08.163740 systemd[1]: Stopped target sysinit.target - System Initialization. May 15 01:05:08.166620 systemd[1]: Stopped target local-fs.target - Local File Systems. May 15 01:05:08.169683 systemd[1]: Stopped target swap.target - Swaps. May 15 01:05:08.172358 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 15 01:05:08.172661 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. May 15 01:05:08.175742 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. May 15 01:05:08.177806 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 15 01:05:08.180121 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. May 15 01:05:08.180469 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 15 01:05:08.183201 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 15 01:05:08.183597 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. 
May 15 01:05:08.187332 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 15 01:05:08.187641 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 15 01:05:08.189435 systemd[1]: ignition-files.service: Deactivated successfully. May 15 01:05:08.189817 systemd[1]: Stopped ignition-files.service - Ignition (files). May 15 01:05:08.195724 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 15 01:05:08.197685 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 15 01:05:08.198583 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. May 15 01:05:08.206855 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 15 01:05:08.208805 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 15 01:05:08.209192 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. May 15 01:05:08.211400 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 15 01:05:08.211689 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 15 01:05:08.224709 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 15 01:05:08.224824 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. May 15 01:05:08.233718 ignition[953]: INFO : Ignition 2.20.0 May 15 01:05:08.233718 ignition[953]: INFO : Stage: umount May 15 01:05:08.236495 ignition[953]: INFO : no configs at "/usr/lib/ignition/base.d" May 15 01:05:08.236495 ignition[953]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" May 15 01:05:08.236495 ignition[953]: INFO : umount: umount passed May 15 01:05:08.236495 ignition[953]: INFO : Ignition finished successfully May 15 01:05:08.236649 systemd[1]: ignition-mount.service: Deactivated successfully. May 15 01:05:08.236775 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 15 01:05:08.239564 systemd[1]: ignition-disks.service: Deactivated successfully. May 15 01:05:08.239613 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 15 01:05:08.242813 systemd[1]: ignition-kargs.service: Deactivated successfully. May 15 01:05:08.242859 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 15 01:05:08.243852 systemd[1]: ignition-fetch.service: Deactivated successfully. May 15 01:05:08.243895 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). May 15 01:05:08.245041 systemd[1]: Stopped target network.target - Network. May 15 01:05:08.246024 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 15 01:05:08.246071 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 15 01:05:08.249218 systemd[1]: Stopped target paths.target - Path Units. May 15 01:05:08.253540 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 15 01:05:08.257321 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 15 01:05:08.258117 systemd[1]: Stopped target slices.target - Slice Units. May 15 01:05:08.259360 systemd[1]: Stopped target sockets.target - Socket Units. May 15 01:05:08.260742 systemd[1]: iscsid.socket: Deactivated successfully. May 15 01:05:08.260782 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 15 01:05:08.261319 systemd[1]: iscsiuio.socket: Deactivated successfully. May 15 01:05:08.261349 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. 
May 15 01:05:08.262266 systemd[1]: ignition-setup.service: Deactivated successfully. May 15 01:05:08.262329 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 15 01:05:08.263249 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 15 01:05:08.263323 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 15 01:05:08.265201 systemd[1]: Stopping systemd-networkd.service - Network Configuration... May 15 01:05:08.266299 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 15 01:05:08.269576 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 15 01:05:08.270763 systemd[1]: systemd-resolved.service: Deactivated successfully. May 15 01:05:08.270873 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. May 15 01:05:08.274915 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. May 15 01:05:08.275125 systemd[1]: sysroot-boot.service: Deactivated successfully. May 15 01:05:08.275216 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. May 15 01:05:08.277212 systemd[1]: systemd-networkd.service: Deactivated successfully. May 15 01:05:08.277354 systemd[1]: Stopped systemd-networkd.service - Network Configuration. May 15 01:05:08.279142 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. May 15 01:05:08.280464 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 15 01:05:08.280512 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 15 01:05:08.284627 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 15 01:05:08.284674 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. May 15 01:05:08.287361 systemd[1]: Stopping network-cleanup.service - Network Cleanup... May 15 01:05:08.288051 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 15 01:05:08.288117 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 15 01:05:08.289630 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 15 01:05:08.289674 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 15 01:05:08.291409 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 15 01:05:08.291452 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. May 15 01:05:08.292694 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. May 15 01:05:08.292737 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. May 15 01:05:08.294166 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... May 15 01:05:08.295799 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 15 01:05:08.295858 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. May 15 01:05:08.308605 systemd[1]: systemd-udevd.service: Deactivated successfully. May 15 01:05:08.308758 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. May 15 01:05:08.310699 systemd[1]: network-cleanup.service: Deactivated successfully. May 15 01:05:08.310860 systemd[1]: Stopped network-cleanup.service - Network Cleanup. May 15 01:05:08.312406 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 15 01:05:08.312468 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. 
May 15 01:05:08.313237 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 15 01:05:08.313341 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. May 15 01:05:08.314392 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 15 01:05:08.314439 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. May 15 01:05:08.316028 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 15 01:05:08.316069 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. May 15 01:05:08.317205 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 15 01:05:08.317248 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 15 01:05:08.320379 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... May 15 01:05:08.321351 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 15 01:05:08.321404 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 15 01:05:08.324084 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 15 01:05:08.324151 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 15 01:05:08.326628 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. May 15 01:05:08.326685 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. May 15 01:05:08.333440 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 15 01:05:08.333553 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. May 15 01:05:08.334290 systemd[1]: Reached target initrd-switch-root.target - Switch Root. May 15 01:05:08.336407 systemd[1]: Starting initrd-switch-root.service - Switch Root... May 15 01:05:08.355243 systemd[1]: Switching root. May 15 01:05:08.390986 systemd-journald[184]: Journal stopped May 15 01:05:10.370366 systemd-journald[184]: Received SIGTERM from PID 1 (systemd). May 15 01:05:10.370414 kernel: SELinux: policy capability network_peer_controls=1 May 15 01:05:10.370431 kernel: SELinux: policy capability open_perms=1 May 15 01:05:10.370442 kernel: SELinux: policy capability extended_socket_class=1 May 15 01:05:10.370454 kernel: SELinux: policy capability always_check_network=0 May 15 01:05:10.370465 kernel: SELinux: policy capability cgroup_seclabel=1 May 15 01:05:10.370478 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 15 01:05:10.370489 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 15 01:05:10.370504 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 15 01:05:10.370516 kernel: audit: type=1403 audit(1747271109.218:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 May 15 01:05:10.370528 systemd[1]: Successfully loaded SELinux policy in 81.408ms. May 15 01:05:10.370549 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 27.063ms. May 15 01:05:10.370563 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 15 01:05:10.370575 systemd[1]: Detected virtualization kvm. May 15 01:05:10.370588 systemd[1]: Detected architecture x86-64. 
May 15 01:05:10.370599 systemd[1]: Detected first boot. May 15 01:05:10.370614 systemd[1]: Hostname set to . May 15 01:05:10.370626 systemd[1]: Initializing machine ID from VM UUID. May 15 01:05:10.370638 zram_generator::config[1000]: No configuration found. May 15 01:05:10.370651 kernel: Guest personality initialized and is inactive May 15 01:05:10.370662 kernel: VMCI host device registered (name=vmci, major=10, minor=125) May 15 01:05:10.370673 kernel: Initialized host personality May 15 01:05:10.370684 kernel: NET: Registered PF_VSOCK protocol family May 15 01:05:10.370696 systemd[1]: Populated /etc with preset unit settings. May 15 01:05:10.370709 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. May 15 01:05:10.370726 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 15 01:05:10.370739 systemd[1]: Stopped initrd-switch-root.service - Switch Root. May 15 01:05:10.370751 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 15 01:05:10.370764 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. May 15 01:05:10.370776 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. May 15 01:05:10.370789 systemd[1]: Created slice system-getty.slice - Slice /system/getty. May 15 01:05:10.370804 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. May 15 01:05:10.370817 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. May 15 01:05:10.370831 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. May 15 01:05:10.370844 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. May 15 01:05:10.370856 systemd[1]: Created slice user.slice - User and Session Slice. May 15 01:05:10.370868 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 15 01:05:10.370881 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 15 01:05:10.370893 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. May 15 01:05:10.370906 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. May 15 01:05:10.370919 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. May 15 01:05:10.370934 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 15 01:05:10.370946 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... May 15 01:05:10.370959 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 15 01:05:10.370971 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. May 15 01:05:10.370984 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. May 15 01:05:10.370996 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. May 15 01:05:10.371009 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. May 15 01:05:10.371023 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 15 01:05:10.371036 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 15 01:05:10.371048 systemd[1]: Reached target slices.target - Slice Units. May 15 01:05:10.371060 systemd[1]: Reached target swap.target - Swaps. 
May 15 01:05:10.371073 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. May 15 01:05:10.371086 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. May 15 01:05:10.371098 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. May 15 01:05:10.371111 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 15 01:05:10.371123 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 15 01:05:10.371135 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 15 01:05:10.371149 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. May 15 01:05:10.371162 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... May 15 01:05:10.371176 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... May 15 01:05:10.371188 systemd[1]: Mounting media.mount - External Media Directory... May 15 01:05:10.371201 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 15 01:05:10.371213 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... May 15 01:05:10.371225 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... May 15 01:05:10.371237 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... May 15 01:05:10.371252 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 15 01:05:10.371265 systemd[1]: Reached target machines.target - Containers. May 15 01:05:10.371328 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... May 15 01:05:10.371345 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 15 01:05:10.371358 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 15 01:05:10.371371 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... May 15 01:05:10.371384 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 15 01:05:10.371397 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 15 01:05:10.371411 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 15 01:05:10.371428 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... May 15 01:05:10.371441 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 15 01:05:10.371454 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 15 01:05:10.371467 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 15 01:05:10.371481 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. May 15 01:05:10.371494 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 15 01:05:10.371508 systemd[1]: Stopped systemd-fsck-usr.service. May 15 01:05:10.371523 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 15 01:05:10.371538 systemd[1]: Starting systemd-journald.service - Journal Service... 
May 15 01:05:10.371552 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 15 01:05:10.371565 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 15 01:05:10.371578 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... May 15 01:05:10.371591 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... May 15 01:05:10.371605 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 15 01:05:10.371620 systemd[1]: verity-setup.service: Deactivated successfully. May 15 01:05:10.371633 systemd[1]: Stopped verity-setup.service. May 15 01:05:10.371647 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 15 01:05:10.371660 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. May 15 01:05:10.371674 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. May 15 01:05:10.371689 systemd[1]: Mounted media.mount - External Media Directory. May 15 01:05:10.371703 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. May 15 01:05:10.371716 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. May 15 01:05:10.371729 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. May 15 01:05:10.371742 kernel: loop: module loaded May 15 01:05:10.371758 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 15 01:05:10.371771 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 15 01:05:10.371784 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. May 15 01:05:10.371802 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 15 01:05:10.371816 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 15 01:05:10.371829 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 15 01:05:10.371843 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 15 01:05:10.371856 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 15 01:05:10.371869 systemd[1]: modprobe@loop.service: Deactivated successfully. May 15 01:05:10.371882 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 15 01:05:10.371896 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 15 01:05:10.371911 kernel: fuse: init (API version 7.39) May 15 01:05:10.371924 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. May 15 01:05:10.371937 systemd[1]: Reached target network-pre.target - Preparation for Network. May 15 01:05:10.371951 kernel: ACPI: bus type drm_connector registered May 15 01:05:10.371964 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... May 15 01:05:10.371995 systemd-journald[1087]: Collecting audit messages is disabled. May 15 01:05:10.372023 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 15 01:05:10.372037 systemd-journald[1087]: Journal started May 15 01:05:10.372069 systemd-journald[1087]: Runtime Journal (/run/log/journal/ba9d54c60db64ed28f9e30457ddd1379) is 8M, max 78.2M, 70.2M free. 
May 15 01:05:09.977992 systemd[1]: Queued start job for default target multi-user.target. May 15 01:05:09.986731 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. May 15 01:05:09.987197 systemd[1]: systemd-journald.service: Deactivated successfully. May 15 01:05:10.376322 systemd[1]: Reached target local-fs.target - Local File Systems. May 15 01:05:10.381296 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. May 15 01:05:10.387301 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... May 15 01:05:10.399316 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... May 15 01:05:10.404319 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 15 01:05:10.409930 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... May 15 01:05:10.409982 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 15 01:05:10.425304 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... May 15 01:05:10.425382 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 15 01:05:10.428302 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 15 01:05:10.434308 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... May 15 01:05:10.439309 systemd[1]: Started systemd-journald.service - Journal Service. May 15 01:05:10.442421 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. May 15 01:05:10.443223 systemd[1]: modprobe@drm.service: Deactivated successfully. May 15 01:05:10.443411 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 15 01:05:10.447528 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 15 01:05:10.447680 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. May 15 01:05:10.448563 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. May 15 01:05:10.449347 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 15 01:05:10.449965 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. May 15 01:05:10.450695 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. May 15 01:05:10.477378 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... May 15 01:05:10.482300 kernel: loop0: detected capacity change from 0 to 109808 May 15 01:05:10.482904 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... May 15 01:05:10.494178 systemd[1]: Starting systemd-sysusers.service - Create System Users... May 15 01:05:10.505496 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... May 15 01:05:10.509859 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. May 15 01:05:10.510618 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. May 15 01:05:10.512066 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. May 15 01:05:10.516183 systemd-journald[1087]: Time spent on flushing to /var/log/journal/ba9d54c60db64ed28f9e30457ddd1379 is 34.813ms for 967 entries. 
May 15 01:05:10.516183 systemd-journald[1087]: System Journal (/var/log/journal/ba9d54c60db64ed28f9e30457ddd1379) is 8M, max 584.8M, 576.8M free. May 15 01:05:10.567905 systemd-journald[1087]: Received client request to flush runtime journal. May 15 01:05:10.567958 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 15 01:05:10.517579 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... May 15 01:05:10.519666 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 15 01:05:10.561717 udevadm[1147]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. May 15 01:05:10.570116 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. May 15 01:05:10.606821 kernel: loop1: detected capacity change from 0 to 218376 May 15 01:05:10.614934 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. May 15 01:05:10.644869 systemd[1]: Finished systemd-sysusers.service - Create System Users. May 15 01:05:10.652425 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 15 01:05:10.655292 kernel: loop2: detected capacity change from 0 to 8 May 15 01:05:10.685470 kernel: loop3: detected capacity change from 0 to 151640 May 15 01:05:10.694752 systemd-tmpfiles[1160]: ACLs are not supported, ignoring. May 15 01:05:10.694773 systemd-tmpfiles[1160]: ACLs are not supported, ignoring. May 15 01:05:10.700792 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 15 01:05:10.745313 kernel: loop4: detected capacity change from 0 to 109808 May 15 01:05:10.795306 kernel: loop5: detected capacity change from 0 to 218376 May 15 01:05:10.841305 kernel: loop6: detected capacity change from 0 to 8 May 15 01:05:10.844299 kernel: loop7: detected capacity change from 0 to 151640 May 15 01:05:10.902869 (sd-merge)[1166]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'. May 15 01:05:10.903411 (sd-merge)[1166]: Merged extensions into '/usr'. May 15 01:05:10.915356 systemd[1]: Reload requested from client PID 1119 ('systemd-sysext') (unit systemd-sysext.service)... May 15 01:05:10.915379 systemd[1]: Reloading... May 15 01:05:10.999846 zram_generator::config[1190]: No configuration found. May 15 01:05:11.312232 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 15 01:05:11.400501 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 15 01:05:11.400579 systemd[1]: Reloading finished in 484 ms. May 15 01:05:11.418906 ldconfig[1115]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 15 01:05:11.428631 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. May 15 01:05:11.429706 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. May 15 01:05:11.430632 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. May 15 01:05:11.444526 systemd[1]: Starting ensure-sysext.service... May 15 01:05:11.448391 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... 
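The (sd-merge) lines are systemd-sysext assembling the final /usr: it picked up the kubernetes extension that Ignition linked under /etc/extensions (op(a)/op(b) in the files stage) together with the containerd, docker and OEM extensions shipped with the image, and overlaid them all onto /usr. A simplified picture of its inputs, not a literal listing from this host:

    /etc/extensions/kubernetes.raw -> /opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw   (written by Ignition)
    containerd-flatcar, docker-flatcar, oem-openstack                                            (extension images shipped with Flatcar / the OEM partition)
    result: a single overlay mounted on /usr, so the kubernetes tooling and container runtimes are in place before the services that need them start.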
May 15 01:05:11.451512 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 15 01:05:11.476751 systemd[1]: Reload requested from client PID 1251 ('systemctl') (unit ensure-sysext.service)... May 15 01:05:11.476884 systemd[1]: Reloading... May 15 01:05:11.502467 systemd-tmpfiles[1252]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 15 01:05:11.502737 systemd-tmpfiles[1252]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. May 15 01:05:11.503601 systemd-tmpfiles[1252]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 15 01:05:11.503901 systemd-tmpfiles[1252]: ACLs are not supported, ignoring. May 15 01:05:11.503960 systemd-tmpfiles[1252]: ACLs are not supported, ignoring. May 15 01:05:11.512806 systemd-tmpfiles[1252]: Detected autofs mount point /boot during canonicalization of boot. May 15 01:05:11.512957 systemd-tmpfiles[1252]: Skipping /boot May 15 01:05:11.522964 systemd-udevd[1253]: Using default interface naming scheme 'v255'. May 15 01:05:11.531426 systemd-tmpfiles[1252]: Detected autofs mount point /boot during canonicalization of boot. May 15 01:05:11.531535 systemd-tmpfiles[1252]: Skipping /boot May 15 01:05:11.565310 zram_generator::config[1282]: No configuration found. May 15 01:05:11.681565 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1295) May 15 01:05:11.753319 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 May 15 01:05:11.760321 kernel: ACPI: button: Power Button [PWRF] May 15 01:05:11.779322 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 May 15 01:05:11.820294 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 May 15 01:05:11.827587 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 15 01:05:11.864304 kernel: mousedev: PS/2 mouse device common for all mice May 15 01:05:11.890848 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0 May 15 01:05:11.890959 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console May 15 01:05:11.898320 kernel: Console: switching to colour dummy device 80x25 May 15 01:05:11.898407 kernel: [drm] features: -virgl +edid -resource_blob -host_visible May 15 01:05:11.898428 kernel: [drm] features: -context_init May 15 01:05:11.898444 kernel: [drm] number of scanouts: 1 May 15 01:05:11.898462 kernel: [drm] number of cap sets: 0 May 15 01:05:11.902040 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0 May 15 01:05:11.914033 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device May 15 01:05:11.914096 kernel: Console: switching to colour frame buffer device 160x50 May 15 01:05:11.922313 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device May 15 01:05:11.954951 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 15 01:05:11.958244 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. May 15 01:05:11.958349 systemd[1]: Reloading finished in 481 ms. May 15 01:05:11.973807 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
May 15 01:05:11.979854 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 15 01:05:12.024370 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. May 15 01:05:12.026838 systemd[1]: Finished ensure-sysext.service. May 15 01:05:12.041238 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 15 01:05:12.042655 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 15 01:05:12.050414 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... May 15 01:05:12.050673 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 15 01:05:12.053497 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... May 15 01:05:12.058756 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 15 01:05:12.063846 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 15 01:05:12.066734 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 15 01:05:12.072491 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 15 01:05:12.073662 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 15 01:05:12.076594 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... May 15 01:05:12.077661 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 15 01:05:12.079521 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... May 15 01:05:12.082400 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 15 01:05:12.086188 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 15 01:05:12.104433 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... May 15 01:05:12.110817 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... May 15 01:05:12.113770 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 15 01:05:12.114644 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 15 01:05:12.120581 lvm[1374]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 15 01:05:12.125042 systemd[1]: Starting systemd-userdbd.service - User Database Manager... May 15 01:05:12.126483 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 15 01:05:12.127410 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 15 01:05:12.144532 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 15 01:05:12.144757 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 15 01:05:12.149837 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. May 15 01:05:12.150831 systemd[1]: modprobe@drm.service: Deactivated successfully. May 15 01:05:12.151054 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. 
May 15 01:05:12.161036 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 15 01:05:12.174062 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... May 15 01:05:12.177078 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 15 01:05:12.177578 systemd[1]: modprobe@loop.service: Deactivated successfully. May 15 01:05:12.179977 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 15 01:05:12.184811 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 15 01:05:12.187980 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. May 15 01:05:12.201749 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. May 15 01:05:12.205185 lvm[1399]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 15 01:05:12.232347 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. May 15 01:05:12.237715 systemd[1]: Starting systemd-update-done.service - Update is Completed... May 15 01:05:12.250312 augenrules[1420]: No rules May 15 01:05:12.256135 systemd[1]: audit-rules.service: Deactivated successfully. May 15 01:05:12.256894 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 15 01:05:12.260537 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. May 15 01:05:12.276708 systemd[1]: Finished systemd-update-done.service - Update is Completed. May 15 01:05:12.283430 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 15 01:05:12.287191 systemd[1]: Started systemd-userdbd.service - User Database Manager. May 15 01:05:12.311708 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. May 15 01:05:12.320710 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 15 01:05:12.374802 systemd-networkd[1381]: lo: Link UP May 15 01:05:12.374811 systemd-networkd[1381]: lo: Gained carrier May 15 01:05:12.376137 systemd-networkd[1381]: Enumeration completed May 15 01:05:12.376239 systemd[1]: Started systemd-networkd.service - Network Configuration. May 15 01:05:12.376530 systemd-networkd[1381]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 15 01:05:12.376535 systemd-networkd[1381]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 15 01:05:12.380509 systemd-networkd[1381]: eth0: Link UP May 15 01:05:12.380514 systemd-networkd[1381]: eth0: Gained carrier May 15 01:05:12.380530 systemd-networkd[1381]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 15 01:05:12.383450 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... May 15 01:05:12.395355 systemd-networkd[1381]: eth0: DHCPv4 address 172.24.4.204/24, gateway 172.24.4.1 acquired from 172.24.4.1 May 15 01:05:12.396143 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... 
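The networkd messages show eth0 being matched by the generic /usr/lib/systemd/network/zz-default.network unit shipped with the image and then acquiring 172.24.4.204/24 from the DHCP server at 172.24.4.1. A catch-all unit of that kind looks roughly like the following; this is the general pattern, not the literal contents of the file on this host:

    [Match]
    Name=*

    [Network]
    DHCP=yes

The DHCP= setting is what the "DHCPv4 address ... acquired from 172.24.4.1" line above reflects.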
May 15 01:05:12.409037 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. May 15 01:05:12.409921 systemd[1]: Reached target time-set.target - System Time Set. May 15 01:05:12.430604 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. May 15 01:05:12.443660 systemd-resolved[1383]: Positive Trust Anchors: May 15 01:05:12.443677 systemd-resolved[1383]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 15 01:05:12.443718 systemd-resolved[1383]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 15 01:05:12.448414 systemd-resolved[1383]: Using system hostname 'ci-4284-0-0-n-df1b790171.novalocal'. May 15 01:05:12.449993 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 15 01:05:12.451167 systemd[1]: Reached target network.target - Network. May 15 01:05:12.451833 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 15 01:05:12.454927 systemd[1]: Reached target sysinit.target - System Initialization. May 15 01:05:12.455741 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 15 01:05:12.457202 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 15 01:05:12.459707 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 15 01:05:12.462063 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 15 01:05:12.464375 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 15 01:05:12.466689 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 15 01:05:12.466791 systemd[1]: Reached target paths.target - Path Units. May 15 01:05:12.469246 systemd[1]: Reached target timers.target - Timer Units. May 15 01:05:12.474334 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 15 01:05:12.481201 systemd[1]: Starting docker.socket - Docker Socket for the API... May 15 01:05:12.486130 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). May 15 01:05:12.492770 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). May 15 01:05:12.494945 systemd[1]: Reached target ssh-access.target - SSH Access Available. May 15 01:05:12.510084 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 15 01:05:12.512021 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. May 15 01:05:12.516599 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 15 01:05:12.518616 systemd[1]: Reached target sockets.target - Socket Units. May 15 01:05:12.520794 systemd[1]: Reached target basic.target - Basic System. 
May 15 01:05:12.523050 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 15 01:05:12.523163 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 15 01:05:12.525355 systemd[1]: Starting containerd.service - containerd container runtime... May 15 01:05:12.529397 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... May 15 01:05:12.534640 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 15 01:05:12.546712 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 15 01:05:12.551491 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 15 01:05:12.552229 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 15 01:05:12.555637 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 15 01:05:12.562190 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 15 01:05:12.568206 jq[1448]: false May 15 01:05:12.571638 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 15 01:05:12.577582 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 15 01:05:12.587567 systemd[1]: Starting systemd-logind.service - User Login Management... May 15 01:05:12.594373 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 15 01:05:12.596630 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 15 01:05:12.598554 systemd[1]: Starting update-engine.service - Update Engine... May 15 01:05:12.606304 dbus-daemon[1447]: [system] SELinux support is enabled May 15 01:05:12.607568 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 15 01:05:12.609076 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 15 01:05:12.622429 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 15 01:05:12.623045 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 15 01:05:12.623356 systemd[1]: motdgen.service: Deactivated successfully. May 15 01:05:12.623948 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 15 01:05:12.636678 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 15 01:05:12.636924 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. May 15 01:05:12.645052 update_engine[1463]: I20250515 01:05:12.644650 1463 main.cc:92] Flatcar Update Engine starting May 15 01:05:12.648300 update_engine[1463]: I20250515 01:05:12.647173 1463 update_check_scheduler.cc:74] Next update check in 7m12s May 15 01:05:12.649556 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 15 01:05:12.649599 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
May 15 01:05:12.651862 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 15 01:05:12.651896 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 15 01:05:12.652610 systemd[1]: Started update-engine.service - Update Engine. May 15 01:05:12.652759 extend-filesystems[1449]: Found loop4 May 15 01:05:12.662008 extend-filesystems[1449]: Found loop5 May 15 01:05:12.662008 extend-filesystems[1449]: Found loop6 May 15 01:05:12.662008 extend-filesystems[1449]: Found loop7 May 15 01:05:12.662008 extend-filesystems[1449]: Found vda May 15 01:05:12.662008 extend-filesystems[1449]: Found vda1 May 15 01:05:12.662008 extend-filesystems[1449]: Found vda2 May 15 01:05:12.662008 extend-filesystems[1449]: Found vda3 May 15 01:05:12.662008 extend-filesystems[1449]: Found usr May 15 01:05:12.662008 extend-filesystems[1449]: Found vda4 May 15 01:05:12.662008 extend-filesystems[1449]: Found vda6 May 15 01:05:12.662008 extend-filesystems[1449]: Found vda7 May 15 01:05:12.662008 extend-filesystems[1449]: Found vda9 May 15 01:05:12.662008 extend-filesystems[1449]: Checking size of /dev/vda9 May 15 01:05:12.772881 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 2014203 blocks May 15 01:05:12.677305 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 15 01:05:12.773048 extend-filesystems[1449]: Resized partition /dev/vda9 May 15 01:05:12.704504 systemd-timesyncd[1384]: Contacted time server 64.142.54.12:123 (0.flatcar.pool.ntp.org). May 15 01:05:12.780880 extend-filesystems[1485]: resize2fs 1.47.2 (1-Jan-2025) May 15 01:05:12.799382 kernel: EXT4-fs (vda9): resized filesystem to 2014203 May 15 01:05:12.704568 systemd-timesyncd[1384]: Initial clock synchronization to Thu 2025-05-15 01:05:13.000815 UTC. May 15 01:05:12.852947 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1306) May 15 01:05:12.853025 jq[1464]: true May 15 01:05:12.723774 (ntainerd)[1482]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 15 01:05:12.853450 tar[1466]: linux-amd64/LICENSE May 15 01:05:12.853450 tar[1466]: linux-amd64/helm May 15 01:05:12.750212 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 15 01:05:12.853783 jq[1481]: true May 15 01:05:12.825595 systemd-logind[1462]: New seat seat0. May 15 01:05:12.839186 systemd-logind[1462]: Watching system buttons on /dev/input/event1 (Power Button) May 15 01:05:12.859121 extend-filesystems[1485]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required May 15 01:05:12.859121 extend-filesystems[1485]: old_desc_blocks = 1, new_desc_blocks = 1 May 15 01:05:12.859121 extend-filesystems[1485]: The filesystem on /dev/vda9 is now 2014203 (4k) blocks long. May 15 01:05:12.839203 systemd-logind[1462]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) May 15 01:05:12.883960 extend-filesystems[1449]: Resized filesystem in /dev/vda9 May 15 01:05:12.840357 systemd[1]: Started systemd-logind.service - User Login Management. May 15 01:05:12.858062 systemd[1]: extend-filesystems.service: Deactivated successfully. May 15 01:05:12.859414 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
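As a quick sanity check on the extend-filesystems output above: the root ext4 filesystem on vda9 grew from 1617920 to 2014203 blocks of 4 KiB, i.e. from roughly 1617920 × 4096 ≈ 6.2 GiB to 2014203 × 4096 ≈ 7.7 GiB, which is the online resize2fs pass expanding / to use the space available in the partition.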
May 15 01:05:12.991378 bash[1508]: Updated "/home/core/.ssh/authorized_keys" May 15 01:05:12.993432 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 15 01:05:13.005170 systemd[1]: Starting sshkeys.service... May 15 01:05:13.023463 locksmithd[1476]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 15 01:05:13.047258 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. May 15 01:05:13.057696 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... May 15 01:05:13.076812 sshd_keygen[1484]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 15 01:05:13.130670 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 15 01:05:13.139125 systemd[1]: Starting issuegen.service - Generate /run/issue... May 15 01:05:13.146754 systemd[1]: Started sshd@0-172.24.4.204:22-172.24.4.1:59498.service - OpenSSH per-connection server daemon (172.24.4.1:59498). May 15 01:05:13.166630 systemd[1]: issuegen.service: Deactivated successfully. May 15 01:05:13.166923 systemd[1]: Finished issuegen.service - Generate /run/issue. May 15 01:05:13.180862 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 15 01:05:13.228669 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 15 01:05:13.237106 systemd[1]: Started getty@tty1.service - Getty on tty1. May 15 01:05:13.243952 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. May 15 01:05:13.246162 systemd[1]: Reached target getty.target - Login Prompts. May 15 01:05:13.380405 containerd[1482]: time="2025-05-15T01:05:13Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 May 15 01:05:13.382342 containerd[1482]: time="2025-05-15T01:05:13.381344869Z" level=info msg="starting containerd" revision=88aa2f531d6c2922003cc7929e51daf1c14caa0a version=v2.0.1 May 15 01:05:13.396969 containerd[1482]: time="2025-05-15T01:05:13.396921620Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="7.679µs" May 15 01:05:13.397106 containerd[1482]: time="2025-05-15T01:05:13.397089045Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 May 15 01:05:13.397174 containerd[1482]: time="2025-05-15T01:05:13.397158378Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 May 15 01:05:13.397447 containerd[1482]: time="2025-05-15T01:05:13.397427387Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 May 15 01:05:13.397537 containerd[1482]: time="2025-05-15T01:05:13.397520970Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 May 15 01:05:13.397615 containerd[1482]: time="2025-05-15T01:05:13.397600579Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 15 01:05:13.397741 containerd[1482]: time="2025-05-15T01:05:13.397720885Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 15 01:05:13.397801 containerd[1482]: time="2025-05-15T01:05:13.397787111Z" 
level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 15 01:05:13.398110 containerd[1482]: time="2025-05-15T01:05:13.398085274Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 15 01:05:13.398173 containerd[1482]: time="2025-05-15T01:05:13.398158680Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 15 01:05:13.398245 containerd[1482]: time="2025-05-15T01:05:13.398228647Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 15 01:05:13.398348 containerd[1482]: time="2025-05-15T01:05:13.398296452Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 May 15 01:05:13.398497 containerd[1482]: time="2025-05-15T01:05:13.398476887Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 May 15 01:05:13.398794 containerd[1482]: time="2025-05-15T01:05:13.398772566Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 15 01:05:13.398894 containerd[1482]: time="2025-05-15T01:05:13.398874306Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 15 01:05:13.398957 containerd[1482]: time="2025-05-15T01:05:13.398943036Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 May 15 01:05:13.399053 containerd[1482]: time="2025-05-15T01:05:13.399034479Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 May 15 01:05:13.399450 containerd[1482]: time="2025-05-15T01:05:13.399428605Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 May 15 01:05:13.399575 containerd[1482]: time="2025-05-15T01:05:13.399556569Z" level=info msg="metadata content store policy set" policy=shared May 15 01:05:13.605697 tar[1466]: linux-amd64/README.md May 15 01:05:13.633133 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
May 15 01:05:13.681470 containerd[1482]: time="2025-05-15T01:05:13.681228620Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 May 15 01:05:13.681684 containerd[1482]: time="2025-05-15T01:05:13.681403724Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 May 15 01:05:13.681684 containerd[1482]: time="2025-05-15T01:05:13.681535979Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 May 15 01:05:13.681684 containerd[1482]: time="2025-05-15T01:05:13.681571429Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 May 15 01:05:13.681684 containerd[1482]: time="2025-05-15T01:05:13.681639080Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 May 15 01:05:13.681684 containerd[1482]: time="2025-05-15T01:05:13.681675725Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 May 15 01:05:13.682019 containerd[1482]: time="2025-05-15T01:05:13.681727582Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 May 15 01:05:13.682019 containerd[1482]: time="2025-05-15T01:05:13.681772529Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 May 15 01:05:13.682019 containerd[1482]: time="2025-05-15T01:05:13.681803751Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 May 15 01:05:13.682019 containerd[1482]: time="2025-05-15T01:05:13.681839493Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 May 15 01:05:13.682019 containerd[1482]: time="2025-05-15T01:05:13.681867733Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 May 15 01:05:13.682019 containerd[1482]: time="2025-05-15T01:05:13.681899921Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 May 15 01:05:13.682378 containerd[1482]: time="2025-05-15T01:05:13.682170323Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 May 15 01:05:13.682378 containerd[1482]: time="2025-05-15T01:05:13.682228279Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 May 15 01:05:13.682378 containerd[1482]: time="2025-05-15T01:05:13.682261599Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 May 15 01:05:13.682378 containerd[1482]: time="2025-05-15T01:05:13.682290234Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 May 15 01:05:13.682378 containerd[1482]: time="2025-05-15T01:05:13.682370872Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 May 15 01:05:13.682679 containerd[1482]: time="2025-05-15T01:05:13.682404941Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 May 15 01:05:13.682679 containerd[1482]: time="2025-05-15T01:05:13.682438667Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 May 15 01:05:13.682679 containerd[1482]: time="2025-05-15T01:05:13.682469847Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 May 
15 01:05:13.682679 containerd[1482]: time="2025-05-15T01:05:13.682501059Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 May 15 01:05:13.682679 containerd[1482]: time="2025-05-15T01:05:13.682532427Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 May 15 01:05:13.682679 containerd[1482]: time="2025-05-15T01:05:13.682561467Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 May 15 01:05:13.683008 containerd[1482]: time="2025-05-15T01:05:13.682690927Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" May 15 01:05:13.683008 containerd[1482]: time="2025-05-15T01:05:13.682727885Z" level=info msg="Start snapshots syncer" May 15 01:05:13.683008 containerd[1482]: time="2025-05-15T01:05:13.682775876Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 May 15 01:05:13.683635 containerd[1482]: time="2025-05-15T01:05:13.683491564Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" May 15 01:05:13.684102 containerd[1482]: time="2025-05-15T01:05:13.683643248Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 May 15 01:05:13.684102 containerd[1482]: time="2025-05-15T01:05:13.683912372Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 May 15 01:05:13.684376 containerd[1482]: time="2025-05-15T01:05:13.684234235Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 May 15 01:05:13.684584 containerd[1482]: time="2025-05-15T01:05:13.684396912Z" level=info msg="loading plugin" 
id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 May 15 01:05:13.684584 containerd[1482]: time="2025-05-15T01:05:13.684439273Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 May 15 01:05:13.684584 containerd[1482]: time="2025-05-15T01:05:13.684472396Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 May 15 01:05:13.684584 containerd[1482]: time="2025-05-15T01:05:13.684506558Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 May 15 01:05:13.684584 containerd[1482]: time="2025-05-15T01:05:13.684570873Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 May 15 01:05:13.684584 containerd[1482]: time="2025-05-15T01:05:13.684602989Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 May 15 01:05:13.685330 containerd[1482]: time="2025-05-15T01:05:13.684659562Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 May 15 01:05:13.685330 containerd[1482]: time="2025-05-15T01:05:13.684710162Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 May 15 01:05:13.685330 containerd[1482]: time="2025-05-15T01:05:13.684745800Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 May 15 01:05:13.685330 containerd[1482]: time="2025-05-15T01:05:13.684850729Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 15 01:05:13.685330 containerd[1482]: time="2025-05-15T01:05:13.684985321Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 15 01:05:13.685330 containerd[1482]: time="2025-05-15T01:05:13.685021500Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 15 01:05:13.685330 containerd[1482]: time="2025-05-15T01:05:13.685054270Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 15 01:05:13.685330 containerd[1482]: time="2025-05-15T01:05:13.685084993Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 May 15 01:05:13.685330 containerd[1482]: time="2025-05-15T01:05:13.685111571Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 May 15 01:05:13.685330 containerd[1482]: time="2025-05-15T01:05:13.685140362Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 May 15 01:05:13.685330 containerd[1482]: time="2025-05-15T01:05:13.685181652Z" level=info msg="runtime interface created" May 15 01:05:13.685330 containerd[1482]: time="2025-05-15T01:05:13.685196822Z" level=info msg="created NRI interface" May 15 01:05:13.685330 containerd[1482]: time="2025-05-15T01:05:13.685219036Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 May 15 01:05:13.685330 containerd[1482]: time="2025-05-15T01:05:13.685250414Z" level=info msg="Connect containerd service" May 15 01:05:13.687667 containerd[1482]: time="2025-05-15T01:05:13.685381640Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 
15 01:05:13.687667 containerd[1482]: time="2025-05-15T01:05:13.687394391Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 15 01:05:13.964061 containerd[1482]: time="2025-05-15T01:05:13.963817168Z" level=info msg="Start subscribing containerd event" May 15 01:05:13.964061 containerd[1482]: time="2025-05-15T01:05:13.963949340Z" level=info msg="Start recovering state" May 15 01:05:13.964348 containerd[1482]: time="2025-05-15T01:05:13.963961329Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 15 01:05:13.964348 containerd[1482]: time="2025-05-15T01:05:13.964198577Z" level=info msg="Start event monitor" May 15 01:05:13.964425 containerd[1482]: time="2025-05-15T01:05:13.964365669Z" level=info msg="Start cni network conf syncer for default" May 15 01:05:13.964425 containerd[1482]: time="2025-05-15T01:05:13.964393858Z" level=info msg="Start streaming server" May 15 01:05:13.964662 containerd[1482]: time="2025-05-15T01:05:13.964476552Z" level=info msg="Registered namespace \"k8s.io\" with NRI" May 15 01:05:13.964662 containerd[1482]: time="2025-05-15T01:05:13.964272439Z" level=info msg=serving... address=/run/containerd/containerd.sock May 15 01:05:13.964662 containerd[1482]: time="2025-05-15T01:05:13.964513062Z" level=info msg="runtime interface starting up..." May 15 01:05:13.964662 containerd[1482]: time="2025-05-15T01:05:13.964545791Z" level=info msg="starting plugins..." May 15 01:05:13.964662 containerd[1482]: time="2025-05-15T01:05:13.964567621Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" May 15 01:05:13.965355 containerd[1482]: time="2025-05-15T01:05:13.964822561Z" level=info msg="containerd successfully booted in 0.584911s" May 15 01:05:13.965248 systemd[1]: Started containerd.service - containerd container runtime. May 15 01:05:14.293600 systemd-networkd[1381]: eth0: Gained IPv6LL May 15 01:05:14.298496 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 15 01:05:14.303371 systemd[1]: Reached target network-online.target - Network is Online. May 15 01:05:14.312889 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 01:05:14.321687 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 15 01:05:14.395145 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 15 01:05:14.564638 sshd[1530]: Accepted publickey for core from 172.24.4.1 port 59498 ssh2: RSA SHA256:FgM6DNLOO9l7igabXcQRJCJ/iDzuk0CAYzdzDa1bmG0 May 15 01:05:14.568686 sshd-session[1530]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 01:05:14.580235 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 15 01:05:14.587784 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 15 01:05:14.612757 systemd-logind[1462]: New session 1 of user core. May 15 01:05:14.638929 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 15 01:05:14.649450 systemd[1]: Starting user@500.service - User Manager for UID 500... May 15 01:05:14.669816 (systemd)[1572]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 15 01:05:14.675324 systemd-logind[1462]: New session c1 of user core. 
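Earlier in this stretch containerd's CRI plugin logged "no network config found in /etc/cni/net.d"; its conf syncer keeps retrying until a network add-on installs a config there. A sketch of dropping a placeholder bridge conflist so the syncer has something to load — purely illustrative, with an assumed file name and subnet, and not what this node actually ends up installing:

    package main

    import (
        "fmt"
        "os"
    )

    // Write a minimal CNI bridge configuration into /etc/cni/net.d so the
    // "Start cni network conf syncer" loop above can pick it up.
    // The name, subnet, and file name are illustrative assumptions.
    const conflist = `{
      "cniVersion": "1.0.0",
      "name": "example-bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "cni0",
          "isGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.88.0.0/16" }
        }
      ]
    }
    `

    func main() {
        if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
            fmt.Println(err)
            return
        }
        if err := os.WriteFile("/etc/cni/net.d/10-example.conflist", []byte(conflist), 0o644); err != nil {
            fmt.Println(err)
        }
    }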
May 15 01:05:14.854693 systemd[1572]: Queued start job for default target default.target. May 15 01:05:14.861292 systemd[1572]: Created slice app.slice - User Application Slice. May 15 01:05:14.861344 systemd[1572]: Reached target paths.target - Paths. May 15 01:05:14.861393 systemd[1572]: Reached target timers.target - Timers. May 15 01:05:14.865406 systemd[1572]: Starting dbus.socket - D-Bus User Message Bus Socket... May 15 01:05:14.876390 systemd[1572]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 15 01:05:14.877362 systemd[1572]: Reached target sockets.target - Sockets. May 15 01:05:14.877405 systemd[1572]: Reached target basic.target - Basic System. May 15 01:05:14.877445 systemd[1572]: Reached target default.target - Main User Target. May 15 01:05:14.877477 systemd[1572]: Startup finished in 189ms. May 15 01:05:14.877636 systemd[1]: Started user@500.service - User Manager for UID 500. May 15 01:05:14.891587 systemd[1]: Started session-1.scope - Session 1 of User core. May 15 01:05:15.405049 systemd[1]: Started sshd@1-172.24.4.204:22-172.24.4.1:34274.service - OpenSSH per-connection server daemon (172.24.4.1:34274). May 15 01:05:16.429597 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 15 01:05:16.448222 (kubelet)[1592]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 15 01:05:17.007578 sshd[1583]: Accepted publickey for core from 172.24.4.1 port 34274 ssh2: RSA SHA256:FgM6DNLOO9l7igabXcQRJCJ/iDzuk0CAYzdzDa1bmG0 May 15 01:05:17.011477 sshd-session[1583]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 01:05:17.033640 systemd-logind[1462]: New session 2 of user core. May 15 01:05:17.042764 systemd[1]: Started session-2.scope - Session 2 of User core. May 15 01:05:17.655354 sshd[1597]: Connection closed by 172.24.4.1 port 34274 May 15 01:05:17.657574 sshd-session[1583]: pam_unix(sshd:session): session closed for user core May 15 01:05:17.672860 systemd[1]: sshd@1-172.24.4.204:22-172.24.4.1:34274.service: Deactivated successfully. May 15 01:05:17.678679 systemd[1]: session-2.scope: Deactivated successfully. May 15 01:05:17.681413 systemd-logind[1462]: Session 2 logged out. Waiting for processes to exit. May 15 01:05:17.688606 systemd-logind[1462]: Removed session 2. May 15 01:05:17.694256 systemd[1]: Started sshd@2-172.24.4.204:22-172.24.4.1:34286.service - OpenSSH per-connection server daemon (172.24.4.1:34286). May 15 01:05:17.851905 kubelet[1592]: E0515 01:05:17.851651 1592 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 15 01:05:17.857038 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 15 01:05:17.857655 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 15 01:05:17.858969 systemd[1]: kubelet.service: Consumed 2.060s CPU time, 254.5M memory peak. May 15 01:05:18.414127 login[1536]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) May 15 01:05:18.417180 login[1537]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) May 15 01:05:18.428739 systemd-logind[1462]: New session 4 of user core. 
May 15 01:05:18.447882 systemd[1]: Started session-4.scope - Session 4 of User core. May 15 01:05:18.458015 systemd-logind[1462]: New session 3 of user core. May 15 01:05:18.464487 systemd[1]: Started session-3.scope - Session 3 of User core. May 15 01:05:19.205980 sshd[1603]: Accepted publickey for core from 172.24.4.1 port 34286 ssh2: RSA SHA256:FgM6DNLOO9l7igabXcQRJCJ/iDzuk0CAYzdzDa1bmG0 May 15 01:05:19.209458 sshd-session[1603]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 01:05:19.222653 systemd-logind[1462]: New session 5 of user core. May 15 01:05:19.233941 systemd[1]: Started session-5.scope - Session 5 of User core. May 15 01:05:19.691714 coreos-metadata[1446]: May 15 01:05:19.691 WARN failed to locate config-drive, using the metadata service API instead May 15 01:05:19.758813 coreos-metadata[1446]: May 15 01:05:19.758 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1 May 15 01:05:19.933232 coreos-metadata[1446]: May 15 01:05:19.932 INFO Fetch successful May 15 01:05:19.933232 coreos-metadata[1446]: May 15 01:05:19.933 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 May 15 01:05:19.948505 coreos-metadata[1446]: May 15 01:05:19.948 INFO Fetch successful May 15 01:05:19.948505 coreos-metadata[1446]: May 15 01:05:19.948 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 May 15 01:05:19.959595 coreos-metadata[1446]: May 15 01:05:19.959 INFO Fetch successful May 15 01:05:19.959595 coreos-metadata[1446]: May 15 01:05:19.959 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 May 15 01:05:19.974064 coreos-metadata[1446]: May 15 01:05:19.974 INFO Fetch successful May 15 01:05:19.974064 coreos-metadata[1446]: May 15 01:05:19.974 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 May 15 01:05:19.989453 coreos-metadata[1446]: May 15 01:05:19.989 INFO Fetch successful May 15 01:05:19.989453 coreos-metadata[1446]: May 15 01:05:19.989 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 May 15 01:05:20.003815 coreos-metadata[1446]: May 15 01:05:20.003 INFO Fetch successful May 15 01:05:20.078539 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. May 15 01:05:20.081147 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 15 01:05:20.090424 sshd[1633]: Connection closed by 172.24.4.1 port 34286 May 15 01:05:20.091756 sshd-session[1603]: pam_unix(sshd:session): session closed for user core May 15 01:05:20.107058 systemd-logind[1462]: Session 5 logged out. Waiting for processes to exit. May 15 01:05:20.110630 systemd[1]: sshd@2-172.24.4.204:22-172.24.4.1:34286.service: Deactivated successfully. May 15 01:05:20.118039 systemd[1]: session-5.scope: Deactivated successfully. May 15 01:05:20.123092 systemd-logind[1462]: Removed session 5. 
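The coreos-metadata entries above fall back from the missing config drive to the EC2-compatible metadata service at 169.254.169.254 and walk the endpoints one by one. A minimal sketch of the same style of lookup, assuming the instance can reach that link-local address; the paths are copied from the log:

    package main

    import (
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // Fetch a few of the metadata endpoints the agent above queried.
    func main() {
        client := &http.Client{Timeout: 5 * time.Second}
        paths := []string{
            "/latest/meta-data/hostname",
            "/latest/meta-data/instance-id",
            "/latest/meta-data/local-ipv4",
            "/latest/meta-data/public-ipv4",
        }
        for _, p := range paths {
            resp, err := client.Get("http://169.254.169.254" + p)
            if err != nil {
                fmt.Println(p, "error:", err)
                continue
            }
            body, _ := io.ReadAll(resp.Body)
            resp.Body.Close()
            fmt.Printf("%s: %s\n", p, body)
        }
    }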
May 15 01:05:20.185754 coreos-metadata[1518]: May 15 01:05:20.185 WARN failed to locate config-drive, using the metadata service API instead May 15 01:05:20.232865 coreos-metadata[1518]: May 15 01:05:20.232 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 May 15 01:05:20.242851 coreos-metadata[1518]: May 15 01:05:20.242 INFO Fetch successful May 15 01:05:20.243112 coreos-metadata[1518]: May 15 01:05:20.243 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 May 15 01:05:20.254256 coreos-metadata[1518]: May 15 01:05:20.254 INFO Fetch successful May 15 01:05:20.260579 unknown[1518]: wrote ssh authorized keys file for user: core May 15 01:05:20.332886 update-ssh-keys[1648]: Updated "/home/core/.ssh/authorized_keys" May 15 01:05:20.338006 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). May 15 01:05:20.344383 systemd[1]: Finished sshkeys.service. May 15 01:05:20.355390 systemd[1]: Reached target multi-user.target - Multi-User System. May 15 01:05:20.356701 systemd[1]: Startup finished in 1.217s (kernel) + 16.376s (initrd) + 11.219s (userspace) = 28.813s. May 15 01:05:28.111661 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 15 01:05:28.122920 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 01:05:28.519000 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 15 01:05:28.528779 (kubelet)[1659]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 15 01:05:28.681590 kubelet[1659]: E0515 01:05:28.681422 1659 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 15 01:05:28.690525 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 15 01:05:28.690999 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 15 01:05:28.692184 systemd[1]: kubelet.service: Consumed 441ms CPU time, 101.7M memory peak. May 15 01:05:30.198681 systemd[1]: Started sshd@3-172.24.4.204:22-172.24.4.1:39226.service - OpenSSH per-connection server daemon (172.24.4.1:39226). May 15 01:05:31.374246 sshd[1667]: Accepted publickey for core from 172.24.4.1 port 39226 ssh2: RSA SHA256:FgM6DNLOO9l7igabXcQRJCJ/iDzuk0CAYzdzDa1bmG0 May 15 01:05:31.377905 sshd-session[1667]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 01:05:31.392767 systemd-logind[1462]: New session 6 of user core. May 15 01:05:31.407714 systemd[1]: Started session-6.scope - Session 6 of User core. May 15 01:05:32.017677 sshd[1669]: Connection closed by 172.24.4.1 port 39226 May 15 01:05:32.017477 sshd-session[1667]: pam_unix(sshd:session): session closed for user core May 15 01:05:32.034322 systemd[1]: sshd@3-172.24.4.204:22-172.24.4.1:39226.service: Deactivated successfully. May 15 01:05:32.038079 systemd[1]: session-6.scope: Deactivated successfully. May 15 01:05:32.040075 systemd-logind[1462]: Session 6 logged out. Waiting for processes to exit. May 15 01:05:32.045951 systemd[1]: Started sshd@4-172.24.4.204:22-172.24.4.1:39234.service - OpenSSH per-connection server daemon (172.24.4.1:39234). 
May 15 01:05:32.048636 systemd-logind[1462]: Removed session 6. May 15 01:05:33.172535 sshd[1674]: Accepted publickey for core from 172.24.4.1 port 39234 ssh2: RSA SHA256:FgM6DNLOO9l7igabXcQRJCJ/iDzuk0CAYzdzDa1bmG0 May 15 01:05:33.175710 sshd-session[1674]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 01:05:33.191323 systemd-logind[1462]: New session 7 of user core. May 15 01:05:33.198692 systemd[1]: Started session-7.scope - Session 7 of User core. May 15 01:05:33.817358 sshd[1677]: Connection closed by 172.24.4.1 port 39234 May 15 01:05:33.817812 sshd-session[1674]: pam_unix(sshd:session): session closed for user core May 15 01:05:33.870058 systemd[1]: sshd@4-172.24.4.204:22-172.24.4.1:39234.service: Deactivated successfully. May 15 01:05:33.878153 systemd[1]: session-7.scope: Deactivated successfully. May 15 01:05:33.885158 systemd-logind[1462]: Session 7 logged out. Waiting for processes to exit. May 15 01:05:33.892085 systemd[1]: Started sshd@5-172.24.4.204:22-172.24.4.1:41108.service - OpenSSH per-connection server daemon (172.24.4.1:41108). May 15 01:05:33.895425 systemd-logind[1462]: Removed session 7. May 15 01:05:35.163364 sshd[1682]: Accepted publickey for core from 172.24.4.1 port 41108 ssh2: RSA SHA256:FgM6DNLOO9l7igabXcQRJCJ/iDzuk0CAYzdzDa1bmG0 May 15 01:05:35.167033 sshd-session[1682]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 01:05:35.182116 systemd-logind[1462]: New session 8 of user core. May 15 01:05:35.191686 systemd[1]: Started session-8.scope - Session 8 of User core. May 15 01:05:35.807330 sshd[1685]: Connection closed by 172.24.4.1 port 41108 May 15 01:05:35.808651 sshd-session[1682]: pam_unix(sshd:session): session closed for user core May 15 01:05:35.826082 systemd[1]: sshd@5-172.24.4.204:22-172.24.4.1:41108.service: Deactivated successfully. May 15 01:05:35.830107 systemd[1]: session-8.scope: Deactivated successfully. May 15 01:05:35.834656 systemd-logind[1462]: Session 8 logged out. Waiting for processes to exit. May 15 01:05:35.837881 systemd[1]: Started sshd@6-172.24.4.204:22-172.24.4.1:41116.service - OpenSSH per-connection server daemon (172.24.4.1:41116). May 15 01:05:35.840248 systemd-logind[1462]: Removed session 8. May 15 01:05:37.232505 sshd[1690]: Accepted publickey for core from 172.24.4.1 port 41116 ssh2: RSA SHA256:FgM6DNLOO9l7igabXcQRJCJ/iDzuk0CAYzdzDa1bmG0 May 15 01:05:37.237083 sshd-session[1690]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 01:05:37.249917 systemd-logind[1462]: New session 9 of user core. May 15 01:05:37.260637 systemd[1]: Started session-9.scope - Session 9 of User core. May 15 01:05:37.698870 sudo[1694]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 15 01:05:37.699666 sudo[1694]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 15 01:05:37.731713 sudo[1694]: pam_unix(sudo:session): session closed for user root May 15 01:05:37.927336 sshd[1693]: Connection closed by 172.24.4.1 port 41116 May 15 01:05:37.930427 sshd-session[1690]: pam_unix(sshd:session): session closed for user core May 15 01:05:37.944428 systemd[1]: sshd@6-172.24.4.204:22-172.24.4.1:41116.service: Deactivated successfully. May 15 01:05:37.948991 systemd[1]: session-9.scope: Deactivated successfully. May 15 01:05:37.951097 systemd-logind[1462]: Session 9 logged out. Waiting for processes to exit. 
May 15 01:05:37.956924 systemd[1]: Started sshd@7-172.24.4.204:22-172.24.4.1:41132.service - OpenSSH per-connection server daemon (172.24.4.1:41132). May 15 01:05:37.959537 systemd-logind[1462]: Removed session 9. May 15 01:05:38.942812 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 15 01:05:38.948987 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 01:05:39.295695 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 15 01:05:39.308555 (kubelet)[1709]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 15 01:05:39.386132 sshd[1699]: Accepted publickey for core from 172.24.4.1 port 41132 ssh2: RSA SHA256:FgM6DNLOO9l7igabXcQRJCJ/iDzuk0CAYzdzDa1bmG0 May 15 01:05:39.391855 sshd-session[1699]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 01:05:39.409085 systemd-logind[1462]: New session 10 of user core. May 15 01:05:39.414113 kubelet[1709]: E0515 01:05:39.412792 1709 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 15 01:05:39.418658 systemd[1]: Started session-10.scope - Session 10 of User core. May 15 01:05:39.419375 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 15 01:05:39.419735 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 15 01:05:39.421077 systemd[1]: kubelet.service: Consumed 331ms CPU time, 101.7M memory peak. May 15 01:05:39.820679 sudo[1720]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 15 01:05:39.822454 sudo[1720]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 15 01:05:39.832498 sudo[1720]: pam_unix(sudo:session): session closed for user root May 15 01:05:39.846820 sudo[1719]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules May 15 01:05:39.847671 sudo[1719]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 15 01:05:39.876345 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 15 01:05:39.973416 augenrules[1742]: No rules May 15 01:05:39.977140 systemd[1]: audit-rules.service: Deactivated successfully. May 15 01:05:39.977898 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 15 01:05:39.980232 sudo[1719]: pam_unix(sudo:session): session closed for user root May 15 01:05:40.260692 sshd[1718]: Connection closed by 172.24.4.1 port 41132 May 15 01:05:40.264064 sshd-session[1699]: pam_unix(sshd:session): session closed for user core May 15 01:05:40.278124 systemd[1]: sshd@7-172.24.4.204:22-172.24.4.1:41132.service: Deactivated successfully. May 15 01:05:40.282133 systemd[1]: session-10.scope: Deactivated successfully. May 15 01:05:40.284637 systemd-logind[1462]: Session 10 logged out. Waiting for processes to exit. May 15 01:05:40.289037 systemd[1]: Started sshd@8-172.24.4.204:22-172.24.4.1:41148.service - OpenSSH per-connection server daemon (172.24.4.1:41148). May 15 01:05:40.292762 systemd-logind[1462]: Removed session 10. 
May 15 01:05:41.770571 sshd[1750]: Accepted publickey for core from 172.24.4.1 port 41148 ssh2: RSA SHA256:FgM6DNLOO9l7igabXcQRJCJ/iDzuk0CAYzdzDa1bmG0 May 15 01:05:41.773712 sshd-session[1750]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 01:05:41.786951 systemd-logind[1462]: New session 11 of user core. May 15 01:05:41.793633 systemd[1]: Started session-11.scope - Session 11 of User core. May 15 01:05:42.252053 sudo[1754]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 15 01:05:42.252811 sudo[1754]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 15 01:05:43.621760 systemd[1]: Starting docker.service - Docker Application Container Engine... May 15 01:05:43.630665 (dockerd)[1772]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 15 01:05:44.128068 dockerd[1772]: time="2025-05-15T01:05:44.127475422Z" level=info msg="Starting up" May 15 01:05:44.130190 dockerd[1772]: time="2025-05-15T01:05:44.130052332Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" May 15 01:05:44.276567 dockerd[1772]: time="2025-05-15T01:05:44.276089277Z" level=info msg="Loading containers: start." May 15 01:05:44.526410 kernel: Initializing XFRM netlink socket May 15 01:05:44.640665 systemd-networkd[1381]: docker0: Link UP May 15 01:05:44.694772 dockerd[1772]: time="2025-05-15T01:05:44.694646964Z" level=info msg="Loading containers: done." May 15 01:05:44.730381 dockerd[1772]: time="2025-05-15T01:05:44.729700571Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 15 01:05:44.730381 dockerd[1772]: time="2025-05-15T01:05:44.729912263Z" level=info msg="Docker daemon" commit=c710b88579fcb5e0d53f96dcae976d79323b9166 containerd-snapshotter=false storage-driver=overlay2 version=27.4.1 May 15 01:05:44.730381 dockerd[1772]: time="2025-05-15T01:05:44.730136806Z" level=info msg="Daemon has completed initialization" May 15 01:05:44.737623 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3688508014-merged.mount: Deactivated successfully. May 15 01:05:44.785509 dockerd[1772]: time="2025-05-15T01:05:44.785147224Z" level=info msg="API listen on /run/docker.sock" May 15 01:05:44.785700 systemd[1]: Started docker.service - Docker Application Container Engine. May 15 01:05:47.166964 containerd[1482]: time="2025-05-15T01:05:47.166634059Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\"" May 15 01:05:47.958624 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3739536279.mount: Deactivated successfully. May 15 01:05:49.470230 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. May 15 01:05:49.476896 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 01:05:49.698332 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
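A few entries back dockerd reported "API listen on /run/docker.sock", so the usual way to talk to it programmatically is over that Unix socket. A minimal sketch that queries the daemon's /version endpoint through the socket, assuming the default socket path and no TLS:

    package main

    import (
        "context"
        "fmt"
        "io"
        "net"
        "net/http"
    )

    // Query the Docker Engine API over the Unix socket announced above.
    func main() {
        tr := &http.Transport{
            DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
                return net.Dial("unix", "/run/docker.sock")
            },
        }
        client := &http.Client{Transport: tr}
        resp, err := client.Get("http://docker/version") // host part is ignored when dialing the socket
        if err != nil {
            fmt.Println(err)
            return
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Println(string(body))
    }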
May 15 01:05:49.707114 (kubelet)[2032]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 15 01:05:49.796397 kubelet[2032]: E0515 01:05:49.795356 2032 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 15 01:05:49.799436 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 15 01:05:49.799684 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 15 01:05:49.800296 systemd[1]: kubelet.service: Consumed 232ms CPU time, 105.8M memory peak. May 15 01:05:50.030322 containerd[1482]: time="2025-05-15T01:05:50.030214950Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 01:05:50.032312 containerd[1482]: time="2025-05-15T01:05:50.031957759Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.4: active requests=0, bytes read=28682887" May 15 01:05:50.034395 containerd[1482]: time="2025-05-15T01:05:50.034343876Z" level=info msg="ImageCreate event name:\"sha256:1c20c8797e48698afa3380793df2f1fb260e3209df72d8e864e1bc73af8336e5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 01:05:50.039578 containerd[1482]: time="2025-05-15T01:05:50.039051076Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 01:05:50.040807 containerd[1482]: time="2025-05-15T01:05:50.040749239Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.4\" with image id \"sha256:1c20c8797e48698afa3380793df2f1fb260e3209df72d8e864e1bc73af8336e5\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f\", size \"28679679\" in 2.873150575s" May 15 01:05:50.040883 containerd[1482]: time="2025-05-15T01:05:50.040848395Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\" returns image reference \"sha256:1c20c8797e48698afa3380793df2f1fb260e3209df72d8e864e1bc73af8336e5\"" May 15 01:05:50.042153 containerd[1482]: time="2025-05-15T01:05:50.042007839Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\"" May 15 01:05:52.307202 containerd[1482]: time="2025-05-15T01:05:52.306124678Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 01:05:52.307736 containerd[1482]: time="2025-05-15T01:05:52.307685164Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.4: active requests=0, bytes read=24779597" May 15 01:05:52.309168 containerd[1482]: time="2025-05-15T01:05:52.309131894Z" level=info msg="ImageCreate event name:\"sha256:4db5364cd5509e0fc8e9f821fbc4b31ed79d4c9ae21809d22030ad67d530a61a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 01:05:52.312326 containerd[1482]: time="2025-05-15T01:05:52.312298911Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/kube-controller-manager@sha256:25e29187ea66f0ff9b9a00114849c3a30b649005c900a8b2a69e3f3fa56448fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 01:05:52.313495 containerd[1482]: time="2025-05-15T01:05:52.313447449Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.4\" with image id \"sha256:4db5364cd5509e0fc8e9f821fbc4b31ed79d4c9ae21809d22030ad67d530a61a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:25e29187ea66f0ff9b9a00114849c3a30b649005c900a8b2a69e3f3fa56448fb\", size \"26267962\" in 2.271135179s" May 15 01:05:52.313563 containerd[1482]: time="2025-05-15T01:05:52.313500473Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\" returns image reference \"sha256:4db5364cd5509e0fc8e9f821fbc4b31ed79d4c9ae21809d22030ad67d530a61a\"" May 15 01:05:52.314045 containerd[1482]: time="2025-05-15T01:05:52.314004323Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\"" May 15 01:05:54.296380 containerd[1482]: time="2025-05-15T01:05:54.295222335Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 01:05:54.297996 containerd[1482]: time="2025-05-15T01:05:54.297947841Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.4: active requests=0, bytes read=19169946" May 15 01:05:54.299668 containerd[1482]: time="2025-05-15T01:05:54.299609455Z" level=info msg="ImageCreate event name:\"sha256:70a252485ed1f2e8332b6f0a5f8f57443bfbc3c480228f8dcd82ad5ab5cc4000\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 01:05:54.302893 containerd[1482]: time="2025-05-15T01:05:54.302866823Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:09c55f8dac59a4b8e5e354140f5a4bdd6fa9bd95c42d6bcba6782ed37c31b5a2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 01:05:54.304027 containerd[1482]: time="2025-05-15T01:05:54.303972491Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.4\" with image id \"sha256:70a252485ed1f2e8332b6f0a5f8f57443bfbc3c480228f8dcd82ad5ab5cc4000\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:09c55f8dac59a4b8e5e354140f5a4bdd6fa9bd95c42d6bcba6782ed37c31b5a2\", size \"20658329\" in 1.989913723s" May 15 01:05:54.304027 containerd[1482]: time="2025-05-15T01:05:54.304019538Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\" returns image reference \"sha256:70a252485ed1f2e8332b6f0a5f8f57443bfbc3c480228f8dcd82ad5ab5cc4000\"" May 15 01:05:54.306177 containerd[1482]: time="2025-05-15T01:05:54.306099555Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\"" May 15 01:05:55.808103 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3588852789.mount: Deactivated successfully. 
May 15 01:05:56.430937 containerd[1482]: time="2025-05-15T01:05:56.430223799Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 01:05:56.432533 containerd[1482]: time="2025-05-15T01:05:56.432492063Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.4: active requests=0, bytes read=30917864" May 15 01:05:56.434004 containerd[1482]: time="2025-05-15T01:05:56.433977015Z" level=info msg="ImageCreate event name:\"sha256:608f0c8bf7f9651ca79f170235ea5eefb978a0c1da132e7477a88ad37d171ad3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 01:05:56.436381 containerd[1482]: time="2025-05-15T01:05:56.436347498Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 01:05:56.437672 containerd[1482]: time="2025-05-15T01:05:56.436999376Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.4\" with image id \"sha256:608f0c8bf7f9651ca79f170235ea5eefb978a0c1da132e7477a88ad37d171ad3\", repo tag \"registry.k8s.io/kube-proxy:v1.32.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\", size \"30916875\" in 2.130865994s" May 15 01:05:56.437672 containerd[1482]: time="2025-05-15T01:05:56.437410225Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\" returns image reference \"sha256:608f0c8bf7f9651ca79f170235ea5eefb978a0c1da132e7477a88ad37d171ad3\"" May 15 01:05:56.438448 containerd[1482]: time="2025-05-15T01:05:56.438291697Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" May 15 01:05:57.058882 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1628468630.mount: Deactivated successfully. May 15 01:05:57.559786 update_engine[1463]: I20250515 01:05:57.558576 1463 update_attempter.cc:509] Updating boot flags... 
May 15 01:05:57.619351 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2113) May 15 01:05:57.747474 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2114) May 15 01:05:58.921353 containerd[1482]: time="2025-05-15T01:05:58.920500542Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 01:05:58.923965 containerd[1482]: time="2025-05-15T01:05:58.923748464Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565249" May 15 01:05:58.924540 containerd[1482]: time="2025-05-15T01:05:58.924106311Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 01:05:58.936669 containerd[1482]: time="2025-05-15T01:05:58.936610582Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 01:05:58.939430 containerd[1482]: time="2025-05-15T01:05:58.939093679Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 2.500686725s" May 15 01:05:58.939430 containerd[1482]: time="2025-05-15T01:05:58.939246315Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" May 15 01:05:58.941788 containerd[1482]: time="2025-05-15T01:05:58.941763216Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 15 01:05:59.541428 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1228334497.mount: Deactivated successfully. 
May 15 01:05:59.557773 containerd[1482]: time="2025-05-15T01:05:59.557628201Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 15 01:05:59.559888 containerd[1482]: time="2025-05-15T01:05:59.559667839Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146" May 15 01:05:59.561978 containerd[1482]: time="2025-05-15T01:05:59.561838503Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 15 01:05:59.565229 containerd[1482]: time="2025-05-15T01:05:59.565163259Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 15 01:05:59.567764 containerd[1482]: time="2025-05-15T01:05:59.566075083Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 624.185178ms" May 15 01:05:59.567764 containerd[1482]: time="2025-05-15T01:05:59.566129322Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" May 15 01:05:59.568629 containerd[1482]: time="2025-05-15T01:05:59.568578334Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" May 15 01:05:59.970967 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. May 15 01:05:59.979681 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 01:06:00.265058 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 15 01:06:00.280902 (kubelet)[2136]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 15 01:06:00.395522 kubelet[2136]: E0515 01:06:00.395357 2136 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 15 01:06:00.398384 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 15 01:06:00.398734 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 15 01:06:00.399546 systemd[1]: kubelet.service: Consumed 343ms CPU time, 101.4M memory peak. May 15 01:06:00.690900 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4118390742.mount: Deactivated successfully. 
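Every kubelet restart in this log fails the same way: /var/lib/kubelet/config.yaml does not exist yet because nothing has provisioned the node. A minimal sketch of the missing-file check plus a placeholder KubeletConfiguration; the file contents are an illustrative assumption, not the config this cluster eventually receives:

    package main

    import (
        "fmt"
        "os"
    )

    // The kubelet above exits because this file is missing until the node is
    // provisioned. The fields below are illustrative assumptions; the cgroupDriver
    // value mirrors the "cgroupDriver=systemd" setting seen later in the log.
    const kubeletConfig = `apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd
    `

    func main() {
        const path = "/var/lib/kubelet/config.yaml"
        if _, err := os.Stat(path); err == nil {
            fmt.Println(path, "already exists")
            return
        }
        if err := os.MkdirAll("/var/lib/kubelet", 0o755); err != nil {
            fmt.Println(err)
            return
        }
        if err := os.WriteFile(path, []byte(kubeletConfig), 0o644); err != nil {
            fmt.Println(err)
        }
    }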
May 15 01:06:03.640661 containerd[1482]: time="2025-05-15T01:06:03.640570874Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 01:06:03.642213 containerd[1482]: time="2025-05-15T01:06:03.642148967Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551368" May 15 01:06:03.644303 containerd[1482]: time="2025-05-15T01:06:03.644197422Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 01:06:03.647812 containerd[1482]: time="2025-05-15T01:06:03.647753840Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 01:06:03.650045 containerd[1482]: time="2025-05-15T01:06:03.648935668Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 4.079793412s" May 15 01:06:03.650045 containerd[1482]: time="2025-05-15T01:06:03.648999915Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" May 15 01:06:07.827982 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 15 01:06:07.828983 systemd[1]: kubelet.service: Consumed 343ms CPU time, 101.4M memory peak. May 15 01:06:07.837129 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 01:06:07.889232 systemd[1]: Reload requested from client PID 2225 ('systemctl') (unit session-11.scope)... May 15 01:06:07.889293 systemd[1]: Reloading... May 15 01:06:08.020342 zram_generator::config[2274]: No configuration found. May 15 01:06:08.240725 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 15 01:06:08.370166 systemd[1]: Reloading finished in 480 ms. May 15 01:06:08.427844 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 15 01:06:08.432137 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 15 01:06:08.434551 systemd[1]: kubelet.service: Deactivated successfully. May 15 01:06:08.434891 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 15 01:06:08.434951 systemd[1]: kubelet.service: Consumed 155ms CPU time, 91.8M memory peak. May 15 01:06:08.436904 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 01:06:08.575852 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 15 01:06:08.587668 (kubelet)[2340]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 15 01:06:08.691881 kubelet[2340]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 15 01:06:08.691881 kubelet[2340]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 15 01:06:08.691881 kubelet[2340]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 15 01:06:08.732907 kubelet[2340]: I0515 01:06:08.692003 2340 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 15 01:06:09.300893 kubelet[2340]: I0515 01:06:09.300808 2340 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" May 15 01:06:09.300893 kubelet[2340]: I0515 01:06:09.300892 2340 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 15 01:06:09.302341 kubelet[2340]: I0515 01:06:09.302063 2340 server.go:954] "Client rotation is on, will bootstrap in background" May 15 01:06:10.165266 kubelet[2340]: E0515 01:06:10.165094 2340 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.24.4.204:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.24.4.204:6443: connect: connection refused" logger="UnhandledError" May 15 01:06:10.167557 kubelet[2340]: I0515 01:06:10.167471 2340 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 15 01:06:10.197133 kubelet[2340]: I0515 01:06:10.197072 2340 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 15 01:06:10.208394 kubelet[2340]: I0515 01:06:10.207533 2340 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 15 01:06:10.211555 kubelet[2340]: I0515 01:06:10.211424 2340 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 15 01:06:10.212063 kubelet[2340]: I0515 01:06:10.211538 2340 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4284-0-0-n-df1b790171.novalocal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 15 01:06:10.212764 kubelet[2340]: I0515 01:06:10.212095 2340 topology_manager.go:138] "Creating topology manager with none policy" May 15 01:06:10.212764 kubelet[2340]: I0515 01:06:10.212125 2340 container_manager_linux.go:304] "Creating device plugin manager" May 15 01:06:10.212764 kubelet[2340]: I0515 01:06:10.212580 2340 state_mem.go:36] "Initialized new in-memory state store" May 15 01:06:10.223088 kubelet[2340]: I0515 01:06:10.222993 2340 kubelet.go:446] "Attempting to sync node with API server" May 15 01:06:10.223088 kubelet[2340]: I0515 01:06:10.223067 2340 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 15 01:06:10.223642 kubelet[2340]: I0515 01:06:10.223150 2340 kubelet.go:352] "Adding apiserver pod source" May 15 01:06:10.223642 kubelet[2340]: I0515 01:06:10.223213 2340 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 15 01:06:10.239893 kubelet[2340]: W0515 01:06:10.239135 2340 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.24.4.204:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4284-0-0-n-df1b790171.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.204:6443: connect: connection refused May 15 01:06:10.239893 kubelet[2340]: E0515 01:06:10.239349 2340 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.24.4.204:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4284-0-0-n-df1b790171.novalocal&limit=500&resourceVersion=0\": dial tcp 172.24.4.204:6443: connect: connection refused" logger="UnhandledError" 
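The kubelet's certificate signing request and its Node/Service watches above all fail with "connection refused" against https://172.24.4.204:6443 because nothing is listening there yet (the control-plane static pods have not come up). A quick reachability probe of the same endpoint, sketched with certificate verification disabled since the serving certificate would not be trusted at this stage anyway:

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // Probe the API server endpoint the kubelet above keeps retrying.
    // InsecureSkipVerify is only for this connectivity check, not for real clients.
    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get("https://172.24.4.204:6443/healthz")
        if err != nil {
            fmt.Println("not reachable:", err) // matches the "connection refused" entries above
            return
        }
        defer resp.Body.Close()
        fmt.Println("reachable, status:", resp.Status)
    }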
May 15 01:06:10.239893 kubelet[2340]: I0515 01:06:10.239655 2340 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.1" apiVersion="v1" May 15 01:06:10.243625 kubelet[2340]: I0515 01:06:10.243222 2340 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 15 01:06:10.246259 kubelet[2340]: W0515 01:06:10.246176 2340 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 15 01:06:10.251172 kubelet[2340]: I0515 01:06:10.251114 2340 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 15 01:06:10.251312 kubelet[2340]: I0515 01:06:10.251210 2340 server.go:1287] "Started kubelet" May 15 01:06:10.265851 kubelet[2340]: I0515 01:06:10.265721 2340 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 15 01:06:10.270346 kubelet[2340]: I0515 01:06:10.268472 2340 server.go:490] "Adding debug handlers to kubelet server" May 15 01:06:10.270346 kubelet[2340]: W0515 01:06:10.269217 2340 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.24.4.204:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.24.4.204:6443: connect: connection refused May 15 01:06:10.270346 kubelet[2340]: E0515 01:06:10.269368 2340 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.24.4.204:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.24.4.204:6443: connect: connection refused" logger="UnhandledError" May 15 01:06:10.271057 kubelet[2340]: I0515 01:06:10.270915 2340 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 15 01:06:10.271602 kubelet[2340]: I0515 01:06:10.271552 2340 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 15 01:06:10.275473 kubelet[2340]: E0515 01:06:10.271993 2340 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.24.4.204:6443/api/v1/namespaces/default/events\": dial tcp 172.24.4.204:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4284-0-0-n-df1b790171.novalocal.183f8ddf3519dc59 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4284-0-0-n-df1b790171.novalocal,UID:ci-4284-0-0-n-df1b790171.novalocal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4284-0-0-n-df1b790171.novalocal,},FirstTimestamp:2025-05-15 01:06:10.251152473 +0000 UTC m=+1.633932377,LastTimestamp:2025-05-15 01:06:10.251152473 +0000 UTC m=+1.633932377,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4284-0-0-n-df1b790171.novalocal,}" May 15 01:06:10.277065 kubelet[2340]: I0515 01:06:10.277028 2340 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 15 01:06:10.284850 kubelet[2340]: I0515 01:06:10.277929 2340 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 15 01:06:10.285031 kubelet[2340]: I0515 01:06:10.285014 2340 volume_manager.go:297] "Starting Kubelet 
Volume Manager" May 15 01:06:10.286020 kubelet[2340]: I0515 01:06:10.285926 2340 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 15 01:06:10.286335 kubelet[2340]: I0515 01:06:10.286230 2340 reconciler.go:26] "Reconciler: start to sync state" May 15 01:06:10.287599 kubelet[2340]: E0515 01:06:10.286829 2340 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4284-0-0-n-df1b790171.novalocal\" not found" May 15 01:06:10.288790 kubelet[2340]: W0515 01:06:10.288565 2340 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.24.4.204:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.204:6443: connect: connection refused May 15 01:06:10.288997 kubelet[2340]: E0515 01:06:10.288942 2340 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.24.4.204:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.24.4.204:6443: connect: connection refused" logger="UnhandledError" May 15 01:06:10.290052 kubelet[2340]: E0515 01:06:10.289797 2340 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.204:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4284-0-0-n-df1b790171.novalocal?timeout=10s\": dial tcp 172.24.4.204:6443: connect: connection refused" interval="200ms" May 15 01:06:10.290736 kubelet[2340]: I0515 01:06:10.290693 2340 factory.go:221] Registration of the systemd container factory successfully May 15 01:06:10.291367 kubelet[2340]: I0515 01:06:10.290926 2340 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 15 01:06:10.293038 kubelet[2340]: E0515 01:06:10.291620 2340 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 15 01:06:10.294977 kubelet[2340]: I0515 01:06:10.294953 2340 factory.go:221] Registration of the containerd container factory successfully May 15 01:06:10.310015 kubelet[2340]: I0515 01:06:10.309987 2340 cpu_manager.go:221] "Starting CPU manager" policy="none" May 15 01:06:10.310354 kubelet[2340]: I0515 01:06:10.310265 2340 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 15 01:06:10.310466 kubelet[2340]: I0515 01:06:10.310453 2340 state_mem.go:36] "Initialized new in-memory state store" May 15 01:06:10.317252 kubelet[2340]: I0515 01:06:10.317231 2340 policy_none.go:49] "None policy: Start" May 15 01:06:10.317445 kubelet[2340]: I0515 01:06:10.317428 2340 memory_manager.go:186] "Starting memorymanager" policy="None" May 15 01:06:10.317575 kubelet[2340]: I0515 01:06:10.317562 2340 state_mem.go:35] "Initializing new in-memory state store" May 15 01:06:10.321139 kubelet[2340]: I0515 01:06:10.321074 2340 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 15 01:06:10.322708 kubelet[2340]: I0515 01:06:10.322680 2340 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 15 01:06:10.322997 kubelet[2340]: I0515 01:06:10.322981 2340 status_manager.go:227] "Starting to sync pod status with apiserver" May 15 01:06:10.324535 kubelet[2340]: I0515 01:06:10.324516 2340 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." May 15 01:06:10.324638 kubelet[2340]: I0515 01:06:10.324626 2340 kubelet.go:2388] "Starting kubelet main sync loop" May 15 01:06:10.324810 kubelet[2340]: E0515 01:06:10.324787 2340 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 15 01:06:10.326322 kubelet[2340]: W0515 01:06:10.326248 2340 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.24.4.204:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.204:6443: connect: connection refused May 15 01:06:10.326425 kubelet[2340]: E0515 01:06:10.326346 2340 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.24.4.204:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.24.4.204:6443: connect: connection refused" logger="UnhandledError" May 15 01:06:10.332517 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 15 01:06:10.344837 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 15 01:06:10.348394 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. May 15 01:06:10.356439 kubelet[2340]: I0515 01:06:10.356380 2340 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 15 01:06:10.356707 kubelet[2340]: I0515 01:06:10.356676 2340 eviction_manager.go:189] "Eviction manager: starting control loop" May 15 01:06:10.356777 kubelet[2340]: I0515 01:06:10.356712 2340 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 15 01:06:10.357674 kubelet[2340]: I0515 01:06:10.357638 2340 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 15 01:06:10.359246 kubelet[2340]: E0515 01:06:10.359113 2340 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" May 15 01:06:10.359246 kubelet[2340]: E0515 01:06:10.359186 2340 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4284-0-0-n-df1b790171.novalocal\" not found" May 15 01:06:10.452948 systemd[1]: Created slice kubepods-burstable-podd507d33b63dd2d0c9e82d6cecc8dc0f5.slice - libcontainer container kubepods-burstable-podd507d33b63dd2d0c9e82d6cecc8dc0f5.slice. 
May 15 01:06:10.462649 kubelet[2340]: I0515 01:06:10.462543 2340 kubelet_node_status.go:76] "Attempting to register node" node="ci-4284-0-0-n-df1b790171.novalocal" May 15 01:06:10.463462 kubelet[2340]: E0515 01:06:10.463386 2340 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://172.24.4.204:6443/api/v1/nodes\": dial tcp 172.24.4.204:6443: connect: connection refused" node="ci-4284-0-0-n-df1b790171.novalocal" May 15 01:06:10.468554 kubelet[2340]: E0515 01:06:10.468493 2340 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4284-0-0-n-df1b790171.novalocal\" not found" node="ci-4284-0-0-n-df1b790171.novalocal" May 15 01:06:10.479620 systemd[1]: Created slice kubepods-burstable-pod63cb137597026a41dff61060e9a87661.slice - libcontainer container kubepods-burstable-pod63cb137597026a41dff61060e9a87661.slice. May 15 01:06:10.484995 kubelet[2340]: E0515 01:06:10.484886 2340 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4284-0-0-n-df1b790171.novalocal\" not found" node="ci-4284-0-0-n-df1b790171.novalocal" May 15 01:06:10.490876 systemd[1]: Created slice kubepods-burstable-pod6eeb71f4bc42d156d6772ac0173c3d1b.slice - libcontainer container kubepods-burstable-pod6eeb71f4bc42d156d6772ac0173c3d1b.slice. May 15 01:06:10.499347 kubelet[2340]: E0515 01:06:10.498018 2340 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4284-0-0-n-df1b790171.novalocal\" not found" node="ci-4284-0-0-n-df1b790171.novalocal" May 15 01:06:10.500052 kubelet[2340]: E0515 01:06:10.499988 2340 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.204:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4284-0-0-n-df1b790171.novalocal?timeout=10s\": dial tcp 172.24.4.204:6443: connect: connection refused" interval="400ms" May 15 01:06:10.587348 kubelet[2340]: I0515 01:06:10.587182 2340 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d507d33b63dd2d0c9e82d6cecc8dc0f5-k8s-certs\") pod \"kube-apiserver-ci-4284-0-0-n-df1b790171.novalocal\" (UID: \"d507d33b63dd2d0c9e82d6cecc8dc0f5\") " pod="kube-system/kube-apiserver-ci-4284-0-0-n-df1b790171.novalocal" May 15 01:06:10.587837 kubelet[2340]: I0515 01:06:10.587782 2340 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d507d33b63dd2d0c9e82d6cecc8dc0f5-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4284-0-0-n-df1b790171.novalocal\" (UID: \"d507d33b63dd2d0c9e82d6cecc8dc0f5\") " pod="kube-system/kube-apiserver-ci-4284-0-0-n-df1b790171.novalocal" May 15 01:06:10.588255 kubelet[2340]: I0515 01:06:10.588160 2340 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6eeb71f4bc42d156d6772ac0173c3d1b-ca-certs\") pod \"kube-controller-manager-ci-4284-0-0-n-df1b790171.novalocal\" (UID: \"6eeb71f4bc42d156d6772ac0173c3d1b\") " pod="kube-system/kube-controller-manager-ci-4284-0-0-n-df1b790171.novalocal" May 15 01:06:10.588645 kubelet[2340]: I0515 01:06:10.588602 2340 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/63cb137597026a41dff61060e9a87661-kubeconfig\") pod \"kube-scheduler-ci-4284-0-0-n-df1b790171.novalocal\" (UID: \"63cb137597026a41dff61060e9a87661\") " pod="kube-system/kube-scheduler-ci-4284-0-0-n-df1b790171.novalocal" May 15 01:06:10.588918 kubelet[2340]: I0515 01:06:10.588865 2340 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d507d33b63dd2d0c9e82d6cecc8dc0f5-ca-certs\") pod \"kube-apiserver-ci-4284-0-0-n-df1b790171.novalocal\" (UID: \"d507d33b63dd2d0c9e82d6cecc8dc0f5\") " pod="kube-system/kube-apiserver-ci-4284-0-0-n-df1b790171.novalocal" May 15 01:06:10.589176 kubelet[2340]: I0515 01:06:10.589138 2340 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/6eeb71f4bc42d156d6772ac0173c3d1b-flexvolume-dir\") pod \"kube-controller-manager-ci-4284-0-0-n-df1b790171.novalocal\" (UID: \"6eeb71f4bc42d156d6772ac0173c3d1b\") " pod="kube-system/kube-controller-manager-ci-4284-0-0-n-df1b790171.novalocal" May 15 01:06:10.589462 kubelet[2340]: I0515 01:06:10.589421 2340 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6eeb71f4bc42d156d6772ac0173c3d1b-k8s-certs\") pod \"kube-controller-manager-ci-4284-0-0-n-df1b790171.novalocal\" (UID: \"6eeb71f4bc42d156d6772ac0173c3d1b\") " pod="kube-system/kube-controller-manager-ci-4284-0-0-n-df1b790171.novalocal" May 15 01:06:10.590112 kubelet[2340]: I0515 01:06:10.589730 2340 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6eeb71f4bc42d156d6772ac0173c3d1b-kubeconfig\") pod \"kube-controller-manager-ci-4284-0-0-n-df1b790171.novalocal\" (UID: \"6eeb71f4bc42d156d6772ac0173c3d1b\") " pod="kube-system/kube-controller-manager-ci-4284-0-0-n-df1b790171.novalocal" May 15 01:06:10.590112 kubelet[2340]: I0515 01:06:10.590000 2340 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6eeb71f4bc42d156d6772ac0173c3d1b-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4284-0-0-n-df1b790171.novalocal\" (UID: \"6eeb71f4bc42d156d6772ac0173c3d1b\") " pod="kube-system/kube-controller-manager-ci-4284-0-0-n-df1b790171.novalocal" May 15 01:06:10.668047 kubelet[2340]: I0515 01:06:10.667914 2340 kubelet_node_status.go:76] "Attempting to register node" node="ci-4284-0-0-n-df1b790171.novalocal" May 15 01:06:10.670071 kubelet[2340]: E0515 01:06:10.669947 2340 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://172.24.4.204:6443/api/v1/nodes\": dial tcp 172.24.4.204:6443: connect: connection refused" node="ci-4284-0-0-n-df1b790171.novalocal" May 15 01:06:10.772835 containerd[1482]: time="2025-05-15T01:06:10.772365925Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4284-0-0-n-df1b790171.novalocal,Uid:d507d33b63dd2d0c9e82d6cecc8dc0f5,Namespace:kube-system,Attempt:0,}" May 15 01:06:10.788384 containerd[1482]: time="2025-05-15T01:06:10.787478458Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4284-0-0-n-df1b790171.novalocal,Uid:63cb137597026a41dff61060e9a87661,Namespace:kube-system,Attempt:0,}" May 15 01:06:10.801543 containerd[1482]: 
time="2025-05-15T01:06:10.800853807Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4284-0-0-n-df1b790171.novalocal,Uid:6eeb71f4bc42d156d6772ac0173c3d1b,Namespace:kube-system,Attempt:0,}" May 15 01:06:10.901213 kubelet[2340]: E0515 01:06:10.901128 2340 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.204:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4284-0-0-n-df1b790171.novalocal?timeout=10s\": dial tcp 172.24.4.204:6443: connect: connection refused" interval="800ms" May 15 01:06:10.907624 containerd[1482]: time="2025-05-15T01:06:10.906216160Z" level=info msg="connecting to shim 6edc5b687f3b79882894ca224d3869ed9ed3e52c65ef6371abe44245df1693f5" address="unix:///run/containerd/s/8592378fdb62e66087e69eefc41213c0a947ef0945de2fc4262c2bc1453f00f0" namespace=k8s.io protocol=ttrpc version=3 May 15 01:06:10.908666 containerd[1482]: time="2025-05-15T01:06:10.908638809Z" level=info msg="connecting to shim 604c90b90f48fbe28573647ac389da3fd3b92c341770517fcedd6b7f387c87f2" address="unix:///run/containerd/s/ccb647427cae9c4f9cf98381459625a8a8fcb51135853df46801b0ebe4b9a06f" namespace=k8s.io protocol=ttrpc version=3 May 15 01:06:10.917730 containerd[1482]: time="2025-05-15T01:06:10.917658601Z" level=info msg="connecting to shim b1c1dba16899fd5fd435ac2233ad053414b5caa1f805f150496f60201bb22ba9" address="unix:///run/containerd/s/728b342f2863a642a7b3e710b688dfd1151ea216b0d13ab32802393faa18400c" namespace=k8s.io protocol=ttrpc version=3 May 15 01:06:10.956402 systemd[1]: Started cri-containerd-6edc5b687f3b79882894ca224d3869ed9ed3e52c65ef6371abe44245df1693f5.scope - libcontainer container 6edc5b687f3b79882894ca224d3869ed9ed3e52c65ef6371abe44245df1693f5. May 15 01:06:10.968904 systemd[1]: Started cri-containerd-b1c1dba16899fd5fd435ac2233ad053414b5caa1f805f150496f60201bb22ba9.scope - libcontainer container b1c1dba16899fd5fd435ac2233ad053414b5caa1f805f150496f60201bb22ba9. May 15 01:06:10.973348 systemd[1]: Started cri-containerd-604c90b90f48fbe28573647ac389da3fd3b92c341770517fcedd6b7f387c87f2.scope - libcontainer container 604c90b90f48fbe28573647ac389da3fd3b92c341770517fcedd6b7f387c87f2. 
May 15 01:06:11.062453 containerd[1482]: time="2025-05-15T01:06:11.062308190Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4284-0-0-n-df1b790171.novalocal,Uid:d507d33b63dd2d0c9e82d6cecc8dc0f5,Namespace:kube-system,Attempt:0,} returns sandbox id \"6edc5b687f3b79882894ca224d3869ed9ed3e52c65ef6371abe44245df1693f5\"" May 15 01:06:11.067310 containerd[1482]: time="2025-05-15T01:06:11.067243246Z" level=info msg="CreateContainer within sandbox \"6edc5b687f3b79882894ca224d3869ed9ed3e52c65ef6371abe44245df1693f5\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 15 01:06:11.081976 containerd[1482]: time="2025-05-15T01:06:11.081903278Z" level=info msg="Container 16f937ec5215c4a3893e3d5512a15a2e0122fa07d7bb261389bece0cbe1d5773: CDI devices from CRI Config.CDIDevices: []" May 15 01:06:11.089878 kubelet[2340]: I0515 01:06:11.089184 2340 kubelet_node_status.go:76] "Attempting to register node" node="ci-4284-0-0-n-df1b790171.novalocal" May 15 01:06:11.089878 kubelet[2340]: E0515 01:06:11.089693 2340 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://172.24.4.204:6443/api/v1/nodes\": dial tcp 172.24.4.204:6443: connect: connection refused" node="ci-4284-0-0-n-df1b790171.novalocal" May 15 01:06:11.099547 containerd[1482]: time="2025-05-15T01:06:11.099398006Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4284-0-0-n-df1b790171.novalocal,Uid:6eeb71f4bc42d156d6772ac0173c3d1b,Namespace:kube-system,Attempt:0,} returns sandbox id \"b1c1dba16899fd5fd435ac2233ad053414b5caa1f805f150496f60201bb22ba9\"" May 15 01:06:11.101685 containerd[1482]: time="2025-05-15T01:06:11.101522294Z" level=info msg="CreateContainer within sandbox \"6edc5b687f3b79882894ca224d3869ed9ed3e52c65ef6371abe44245df1693f5\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"16f937ec5215c4a3893e3d5512a15a2e0122fa07d7bb261389bece0cbe1d5773\"" May 15 01:06:11.102627 containerd[1482]: time="2025-05-15T01:06:11.102564757Z" level=info msg="StartContainer for \"16f937ec5215c4a3893e3d5512a15a2e0122fa07d7bb261389bece0cbe1d5773\"" May 15 01:06:11.104218 containerd[1482]: time="2025-05-15T01:06:11.104153223Z" level=info msg="connecting to shim 16f937ec5215c4a3893e3d5512a15a2e0122fa07d7bb261389bece0cbe1d5773" address="unix:///run/containerd/s/8592378fdb62e66087e69eefc41213c0a947ef0945de2fc4262c2bc1453f00f0" protocol=ttrpc version=3 May 15 01:06:11.104629 containerd[1482]: time="2025-05-15T01:06:11.104322972Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4284-0-0-n-df1b790171.novalocal,Uid:63cb137597026a41dff61060e9a87661,Namespace:kube-system,Attempt:0,} returns sandbox id \"604c90b90f48fbe28573647ac389da3fd3b92c341770517fcedd6b7f387c87f2\"" May 15 01:06:11.105521 containerd[1482]: time="2025-05-15T01:06:11.105487116Z" level=info msg="CreateContainer within sandbox \"b1c1dba16899fd5fd435ac2233ad053414b5caa1f805f150496f60201bb22ba9\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 15 01:06:11.109257 containerd[1482]: time="2025-05-15T01:06:11.108467003Z" level=info msg="CreateContainer within sandbox \"604c90b90f48fbe28573647ac389da3fd3b92c341770517fcedd6b7f387c87f2\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 15 01:06:11.128530 containerd[1482]: time="2025-05-15T01:06:11.128486823Z" level=info msg="Container 3d9cf190aff7f0c3e6a1dbf92fe477c095ba7e06ff1c39d9e408820ad11edbd9: CDI devices from CRI Config.CDIDevices: 
[]" May 15 01:06:11.130676 systemd[1]: Started cri-containerd-16f937ec5215c4a3893e3d5512a15a2e0122fa07d7bb261389bece0cbe1d5773.scope - libcontainer container 16f937ec5215c4a3893e3d5512a15a2e0122fa07d7bb261389bece0cbe1d5773. May 15 01:06:11.138063 containerd[1482]: time="2025-05-15T01:06:11.137995203Z" level=info msg="Container 838f1732c67dc2dc13b5cc940fb8dd9869a72ebae0973476e8d4c36a8440f2cc: CDI devices from CRI Config.CDIDevices: []" May 15 01:06:11.150356 containerd[1482]: time="2025-05-15T01:06:11.150242514Z" level=info msg="CreateContainer within sandbox \"604c90b90f48fbe28573647ac389da3fd3b92c341770517fcedd6b7f387c87f2\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"3d9cf190aff7f0c3e6a1dbf92fe477c095ba7e06ff1c39d9e408820ad11edbd9\"" May 15 01:06:11.153286 containerd[1482]: time="2025-05-15T01:06:11.153207259Z" level=info msg="StartContainer for \"3d9cf190aff7f0c3e6a1dbf92fe477c095ba7e06ff1c39d9e408820ad11edbd9\"" May 15 01:06:11.157970 containerd[1482]: time="2025-05-15T01:06:11.155864582Z" level=info msg="connecting to shim 3d9cf190aff7f0c3e6a1dbf92fe477c095ba7e06ff1c39d9e408820ad11edbd9" address="unix:///run/containerd/s/ccb647427cae9c4f9cf98381459625a8a8fcb51135853df46801b0ebe4b9a06f" protocol=ttrpc version=3 May 15 01:06:11.170754 containerd[1482]: time="2025-05-15T01:06:11.170046911Z" level=info msg="CreateContainer within sandbox \"b1c1dba16899fd5fd435ac2233ad053414b5caa1f805f150496f60201bb22ba9\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"838f1732c67dc2dc13b5cc940fb8dd9869a72ebae0973476e8d4c36a8440f2cc\"" May 15 01:06:11.172151 containerd[1482]: time="2025-05-15T01:06:11.171947419Z" level=info msg="StartContainer for \"838f1732c67dc2dc13b5cc940fb8dd9869a72ebae0973476e8d4c36a8440f2cc\"" May 15 01:06:11.179467 containerd[1482]: time="2025-05-15T01:06:11.179393718Z" level=info msg="connecting to shim 838f1732c67dc2dc13b5cc940fb8dd9869a72ebae0973476e8d4c36a8440f2cc" address="unix:///run/containerd/s/728b342f2863a642a7b3e710b688dfd1151ea216b0d13ab32802393faa18400c" protocol=ttrpc version=3 May 15 01:06:11.194846 systemd[1]: Started cri-containerd-3d9cf190aff7f0c3e6a1dbf92fe477c095ba7e06ff1c39d9e408820ad11edbd9.scope - libcontainer container 3d9cf190aff7f0c3e6a1dbf92fe477c095ba7e06ff1c39d9e408820ad11edbd9. May 15 01:06:11.208520 systemd[1]: Started cri-containerd-838f1732c67dc2dc13b5cc940fb8dd9869a72ebae0973476e8d4c36a8440f2cc.scope - libcontainer container 838f1732c67dc2dc13b5cc940fb8dd9869a72ebae0973476e8d4c36a8440f2cc. 
May 15 01:06:11.251561 containerd[1482]: time="2025-05-15T01:06:11.251513577Z" level=info msg="StartContainer for \"16f937ec5215c4a3893e3d5512a15a2e0122fa07d7bb261389bece0cbe1d5773\" returns successfully" May 15 01:06:11.283125 kubelet[2340]: W0515 01:06:11.282733 2340 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.24.4.204:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.204:6443: connect: connection refused May 15 01:06:11.283125 kubelet[2340]: E0515 01:06:11.282890 2340 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.24.4.204:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.24.4.204:6443: connect: connection refused" logger="UnhandledError" May 15 01:06:11.309540 containerd[1482]: time="2025-05-15T01:06:11.308980310Z" level=info msg="StartContainer for \"3d9cf190aff7f0c3e6a1dbf92fe477c095ba7e06ff1c39d9e408820ad11edbd9\" returns successfully" May 15 01:06:11.330069 containerd[1482]: time="2025-05-15T01:06:11.328515353Z" level=info msg="StartContainer for \"838f1732c67dc2dc13b5cc940fb8dd9869a72ebae0973476e8d4c36a8440f2cc\" returns successfully" May 15 01:06:11.342327 kubelet[2340]: E0515 01:06:11.342199 2340 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4284-0-0-n-df1b790171.novalocal\" not found" node="ci-4284-0-0-n-df1b790171.novalocal" May 15 01:06:11.345929 kubelet[2340]: E0515 01:06:11.345806 2340 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4284-0-0-n-df1b790171.novalocal\" not found" node="ci-4284-0-0-n-df1b790171.novalocal" May 15 01:06:11.349303 kubelet[2340]: E0515 01:06:11.349018 2340 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4284-0-0-n-df1b790171.novalocal\" not found" node="ci-4284-0-0-n-df1b790171.novalocal" May 15 01:06:11.366239 kubelet[2340]: W0515 01:06:11.366111 2340 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.24.4.204:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4284-0-0-n-df1b790171.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.204:6443: connect: connection refused May 15 01:06:11.366239 kubelet[2340]: E0515 01:06:11.366196 2340 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.24.4.204:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4284-0-0-n-df1b790171.novalocal&limit=500&resourceVersion=0\": dial tcp 172.24.4.204:6443: connect: connection refused" logger="UnhandledError" May 15 01:06:11.892984 kubelet[2340]: I0515 01:06:11.892915 2340 kubelet_node_status.go:76] "Attempting to register node" node="ci-4284-0-0-n-df1b790171.novalocal" May 15 01:06:12.368379 kubelet[2340]: E0515 01:06:12.368196 2340 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4284-0-0-n-df1b790171.novalocal\" not found" node="ci-4284-0-0-n-df1b790171.novalocal" May 15 01:06:12.371758 kubelet[2340]: E0515 01:06:12.371351 2340 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4284-0-0-n-df1b790171.novalocal\" not found" 
node="ci-4284-0-0-n-df1b790171.novalocal" May 15 01:06:13.926113 kubelet[2340]: E0515 01:06:13.926017 2340 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4284-0-0-n-df1b790171.novalocal\" not found" node="ci-4284-0-0-n-df1b790171.novalocal" May 15 01:06:13.996616 kubelet[2340]: E0515 01:06:13.996255 2340 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4284-0-0-n-df1b790171.novalocal.183f8ddf3519dc59 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4284-0-0-n-df1b790171.novalocal,UID:ci-4284-0-0-n-df1b790171.novalocal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4284-0-0-n-df1b790171.novalocal,},FirstTimestamp:2025-05-15 01:06:10.251152473 +0000 UTC m=+1.633932377,LastTimestamp:2025-05-15 01:06:10.251152473 +0000 UTC m=+1.633932377,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4284-0-0-n-df1b790171.novalocal,}" May 15 01:06:14.054112 kubelet[2340]: E0515 01:06:14.053947 2340 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4284-0-0-n-df1b790171.novalocal.183f8ddf3783153c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4284-0-0-n-df1b790171.novalocal,UID:ci-4284-0-0-n-df1b790171.novalocal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:ci-4284-0-0-n-df1b790171.novalocal,},FirstTimestamp:2025-05-15 01:06:10.291602748 +0000 UTC m=+1.674382652,LastTimestamp:2025-05-15 01:06:10.291602748 +0000 UTC m=+1.674382652,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4284-0-0-n-df1b790171.novalocal,}" May 15 01:06:14.068637 kubelet[2340]: I0515 01:06:14.068573 2340 kubelet_node_status.go:79] "Successfully registered node" node="ci-4284-0-0-n-df1b790171.novalocal" May 15 01:06:14.068908 kubelet[2340]: E0515 01:06:14.068629 2340 kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting node \"ci-4284-0-0-n-df1b790171.novalocal\": node \"ci-4284-0-0-n-df1b790171.novalocal\" not found" May 15 01:06:14.081841 kubelet[2340]: E0515 01:06:14.081728 2340 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4284-0-0-n-df1b790171.novalocal\" not found" May 15 01:06:14.182610 kubelet[2340]: E0515 01:06:14.181960 2340 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4284-0-0-n-df1b790171.novalocal\" not found" May 15 01:06:14.282492 kubelet[2340]: E0515 01:06:14.282377 2340 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4284-0-0-n-df1b790171.novalocal\" not found" May 15 01:06:14.391343 kubelet[2340]: I0515 01:06:14.388170 2340 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4284-0-0-n-df1b790171.novalocal" May 15 01:06:14.405088 kubelet[2340]: E0515 01:06:14.404979 2340 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4284-0-0-n-df1b790171.novalocal\" is forbidden: no PriorityClass 
with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4284-0-0-n-df1b790171.novalocal" May 15 01:06:14.405088 kubelet[2340]: I0515 01:06:14.405051 2340 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4284-0-0-n-df1b790171.novalocal" May 15 01:06:14.410590 kubelet[2340]: E0515 01:06:14.409946 2340 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4284-0-0-n-df1b790171.novalocal\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4284-0-0-n-df1b790171.novalocal" May 15 01:06:14.410590 kubelet[2340]: I0515 01:06:14.410095 2340 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4284-0-0-n-df1b790171.novalocal" May 15 01:06:14.414551 kubelet[2340]: E0515 01:06:14.414449 2340 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4284-0-0-n-df1b790171.novalocal\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4284-0-0-n-df1b790171.novalocal" May 15 01:06:15.258227 kubelet[2340]: I0515 01:06:15.258136 2340 apiserver.go:52] "Watching apiserver" May 15 01:06:15.287734 kubelet[2340]: I0515 01:06:15.287580 2340 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 15 01:06:15.381222 kubelet[2340]: I0515 01:06:15.380716 2340 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4284-0-0-n-df1b790171.novalocal" May 15 01:06:15.394904 kubelet[2340]: W0515 01:06:15.394018 2340 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 15 01:06:16.616915 systemd[1]: Reload requested from client PID 2603 ('systemctl') (unit session-11.scope)... May 15 01:06:16.617012 systemd[1]: Reloading... May 15 01:06:16.755404 zram_generator::config[2655]: No configuration found. May 15 01:06:16.912986 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 15 01:06:17.060706 systemd[1]: Reloading finished in 442 ms. May 15 01:06:17.096073 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 15 01:06:17.096542 kubelet[2340]: I0515 01:06:17.096076 2340 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 15 01:06:17.116782 systemd[1]: kubelet.service: Deactivated successfully. May 15 01:06:17.117081 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 15 01:06:17.117199 systemd[1]: kubelet.service: Consumed 1.502s CPU time, 124M memory peak. May 15 01:06:17.124366 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 01:06:17.428626 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 15 01:06:17.437736 (kubelet)[2713]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 15 01:06:17.510375 kubelet[2713]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
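
The mirror-pod failures above ("no PriorityClass with name system-node-critical was found") clear once the apiserver has populated its built-in priority classes; the "already exists" mirror-pod messages after the kubelet restart further down show the pods did get created on a later sync. A client-go sketch that checks for the class, under the same kubeconfig assumption as the lease sketch earlier.

// priorityclass_check.go - look up the built-in PriorityClass the mirror-pod
// errors above are waiting for. The kubeconfig path is an assumption.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubelet.conf")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	pc, err := cs.SchedulingV1().PriorityClasses().Get(context.Background(),
		"system-node-critical", metav1.GetOptions{})
	if err != nil {
		// While the errors above are being logged, this returns NotFound.
		fmt.Println("not available yet:", err)
		return
	}
	fmt.Printf("%s value=%d globalDefault=%v\n", pc.Name, pc.Value, pc.GlobalDefault)
}
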
May 15 01:06:17.510375 kubelet[2713]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 15 01:06:17.510375 kubelet[2713]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 15 01:06:17.510375 kubelet[2713]: I0515 01:06:17.509804 2713 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 15 01:06:17.521066 kubelet[2713]: I0515 01:06:17.520924 2713 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" May 15 01:06:17.521066 kubelet[2713]: I0515 01:06:17.520959 2713 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 15 01:06:17.521854 kubelet[2713]: I0515 01:06:17.521373 2713 server.go:954] "Client rotation is on, will bootstrap in background" May 15 01:06:17.528815 kubelet[2713]: I0515 01:06:17.527426 2713 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 15 01:06:17.535813 kubelet[2713]: I0515 01:06:17.534313 2713 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 15 01:06:17.544100 kubelet[2713]: I0515 01:06:17.543849 2713 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 15 01:06:17.550190 kubelet[2713]: I0515 01:06:17.550155 2713 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 15 01:06:17.550724 kubelet[2713]: I0515 01:06:17.550677 2713 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 15 01:06:17.551184 kubelet[2713]: I0515 01:06:17.550718 2713 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"ci-4284-0-0-n-df1b790171.novalocal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 15 01:06:17.551706 kubelet[2713]: I0515 01:06:17.551229 2713 topology_manager.go:138] "Creating topology manager with none policy" May 15 01:06:17.551706 kubelet[2713]: I0515 01:06:17.551243 2713 container_manager_linux.go:304] "Creating device plugin manager" May 15 01:06:17.551706 kubelet[2713]: I0515 01:06:17.551481 2713 state_mem.go:36] "Initialized new in-memory state store" May 15 01:06:17.552337 kubelet[2713]: I0515 01:06:17.552313 2713 kubelet.go:446] "Attempting to sync node with API server" May 15 01:06:17.552542 kubelet[2713]: I0515 01:06:17.552359 2713 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 15 01:06:17.552542 kubelet[2713]: I0515 01:06:17.552397 2713 kubelet.go:352] "Adding apiserver pod source" May 15 01:06:17.552542 kubelet[2713]: I0515 01:06:17.552428 2713 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 15 01:06:17.555859 kubelet[2713]: I0515 01:06:17.555797 2713 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.1" apiVersion="v1" May 15 01:06:17.556684 kubelet[2713]: I0515 01:06:17.556343 2713 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 15 01:06:17.557095 kubelet[2713]: I0515 01:06:17.556953 2713 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 15 01:06:17.557095 kubelet[2713]: I0515 01:06:17.556985 2713 server.go:1287] "Started kubelet" May 15 01:06:17.563813 kubelet[2713]: I0515 01:06:17.562696 2713 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 15 01:06:17.575047 kubelet[2713]: I0515 01:06:17.574977 2713 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 15 01:06:17.585174 kubelet[2713]: I0515 01:06:17.585036 2713 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 15 01:06:17.588856 kubelet[2713]: I0515 01:06:17.588818 2713 dynamic_serving_content.go:135] "Starting controller" 
name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 15 01:06:17.594031 kubelet[2713]: I0515 01:06:17.593981 2713 volume_manager.go:297] "Starting Kubelet Volume Manager" May 15 01:06:17.594355 kubelet[2713]: E0515 01:06:17.594324 2713 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4284-0-0-n-df1b790171.novalocal\" not found" May 15 01:06:17.608240 kubelet[2713]: I0515 01:06:17.604993 2713 server.go:490] "Adding debug handlers to kubelet server" May 15 01:06:17.608240 kubelet[2713]: I0515 01:06:17.605732 2713 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 15 01:06:17.608240 kubelet[2713]: I0515 01:06:17.607682 2713 reconciler.go:26] "Reconciler: start to sync state" May 15 01:06:17.608607 kubelet[2713]: I0515 01:06:17.608440 2713 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 15 01:06:17.610819 kubelet[2713]: I0515 01:06:17.610764 2713 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 15 01:06:17.616928 kubelet[2713]: I0515 01:06:17.613468 2713 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 15 01:06:17.616928 kubelet[2713]: I0515 01:06:17.613522 2713 status_manager.go:227] "Starting to sync pod status with apiserver" May 15 01:06:17.616928 kubelet[2713]: I0515 01:06:17.613555 2713 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." May 15 01:06:17.616928 kubelet[2713]: I0515 01:06:17.613563 2713 kubelet.go:2388] "Starting kubelet main sync loop" May 15 01:06:17.616928 kubelet[2713]: E0515 01:06:17.613629 2713 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 15 01:06:17.617333 kubelet[2713]: I0515 01:06:17.617093 2713 factory.go:221] Registration of the systemd container factory successfully May 15 01:06:17.617333 kubelet[2713]: I0515 01:06:17.617218 2713 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 15 01:06:17.626240 kubelet[2713]: E0515 01:06:17.625166 2713 kubelet.go:1561] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 15 01:06:17.626705 kubelet[2713]: I0515 01:06:17.626472 2713 factory.go:221] Registration of the containerd container factory successfully May 15 01:06:17.694389 sudo[2745]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin May 15 01:06:17.696969 sudo[2745]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) May 15 01:06:17.715821 kubelet[2713]: E0515 01:06:17.715588 2713 kubelet.go:2412] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 15 01:06:17.719369 kubelet[2713]: I0515 01:06:17.719327 2713 cpu_manager.go:221] "Starting CPU manager" policy="none" May 15 01:06:17.719369 kubelet[2713]: I0515 01:06:17.719357 2713 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 15 01:06:17.719497 kubelet[2713]: I0515 01:06:17.719390 2713 state_mem.go:36] "Initialized new in-memory state store" May 15 01:06:17.719635 kubelet[2713]: I0515 01:06:17.719609 2713 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 15 01:06:17.719680 kubelet[2713]: I0515 01:06:17.719627 2713 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 15 01:06:17.719680 kubelet[2713]: I0515 01:06:17.719663 2713 policy_none.go:49] "None policy: Start" May 15 01:06:17.719742 kubelet[2713]: I0515 01:06:17.719686 2713 memory_manager.go:186] "Starting memorymanager" policy="None" May 15 01:06:17.719742 kubelet[2713]: I0515 01:06:17.719708 2713 state_mem.go:35] "Initializing new in-memory state store" May 15 01:06:17.719835 kubelet[2713]: I0515 01:06:17.719824 2713 state_mem.go:75] "Updated machine memory state" May 15 01:06:17.728000 kubelet[2713]: I0515 01:06:17.727962 2713 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 15 01:06:17.729287 kubelet[2713]: I0515 01:06:17.728838 2713 eviction_manager.go:189] "Eviction manager: starting control loop" May 15 01:06:17.729287 kubelet[2713]: I0515 01:06:17.728922 2713 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 15 01:06:17.729675 kubelet[2713]: I0515 01:06:17.729653 2713 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 15 01:06:17.735609 kubelet[2713]: E0515 01:06:17.735578 2713 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" May 15 01:06:17.841717 kubelet[2713]: I0515 01:06:17.841647 2713 kubelet_node_status.go:76] "Attempting to register node" node="ci-4284-0-0-n-df1b790171.novalocal" May 15 01:06:17.859210 kubelet[2713]: I0515 01:06:17.858870 2713 kubelet_node_status.go:125] "Node was previously registered" node="ci-4284-0-0-n-df1b790171.novalocal" May 15 01:06:17.859210 kubelet[2713]: I0515 01:06:17.858950 2713 kubelet_node_status.go:79] "Successfully registered node" node="ci-4284-0-0-n-df1b790171.novalocal" May 15 01:06:17.924356 kubelet[2713]: I0515 01:06:17.917001 2713 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4284-0-0-n-df1b790171.novalocal" May 15 01:06:17.926911 kubelet[2713]: I0515 01:06:17.926119 2713 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4284-0-0-n-df1b790171.novalocal" May 15 01:06:17.926911 kubelet[2713]: I0515 01:06:17.926229 2713 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4284-0-0-n-df1b790171.novalocal" May 15 01:06:17.935730 kubelet[2713]: W0515 01:06:17.935238 2713 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 15 01:06:17.940174 kubelet[2713]: W0515 01:06:17.939875 2713 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 15 01:06:17.942045 kubelet[2713]: W0515 01:06:17.940499 2713 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 15 01:06:17.942253 kubelet[2713]: E0515 01:06:17.942173 2713 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4284-0-0-n-df1b790171.novalocal\" already exists" pod="kube-system/kube-controller-manager-ci-4284-0-0-n-df1b790171.novalocal" May 15 01:06:18.110192 kubelet[2713]: I0515 01:06:18.110073 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6eeb71f4bc42d156d6772ac0173c3d1b-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4284-0-0-n-df1b790171.novalocal\" (UID: \"6eeb71f4bc42d156d6772ac0173c3d1b\") " pod="kube-system/kube-controller-manager-ci-4284-0-0-n-df1b790171.novalocal" May 15 01:06:18.110192 kubelet[2713]: I0515 01:06:18.110126 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d507d33b63dd2d0c9e82d6cecc8dc0f5-k8s-certs\") pod \"kube-apiserver-ci-4284-0-0-n-df1b790171.novalocal\" (UID: \"d507d33b63dd2d0c9e82d6cecc8dc0f5\") " pod="kube-system/kube-apiserver-ci-4284-0-0-n-df1b790171.novalocal" May 15 01:06:18.110192 kubelet[2713]: I0515 01:06:18.110161 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6eeb71f4bc42d156d6772ac0173c3d1b-ca-certs\") pod \"kube-controller-manager-ci-4284-0-0-n-df1b790171.novalocal\" (UID: \"6eeb71f4bc42d156d6772ac0173c3d1b\") " pod="kube-system/kube-controller-manager-ci-4284-0-0-n-df1b790171.novalocal" May 15 01:06:18.110585 kubelet[2713]: I0515 01:06:18.110250 2713 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/6eeb71f4bc42d156d6772ac0173c3d1b-flexvolume-dir\") pod \"kube-controller-manager-ci-4284-0-0-n-df1b790171.novalocal\" (UID: \"6eeb71f4bc42d156d6772ac0173c3d1b\") " pod="kube-system/kube-controller-manager-ci-4284-0-0-n-df1b790171.novalocal" May 15 01:06:18.110585 kubelet[2713]: I0515 01:06:18.110293 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6eeb71f4bc42d156d6772ac0173c3d1b-kubeconfig\") pod \"kube-controller-manager-ci-4284-0-0-n-df1b790171.novalocal\" (UID: \"6eeb71f4bc42d156d6772ac0173c3d1b\") " pod="kube-system/kube-controller-manager-ci-4284-0-0-n-df1b790171.novalocal" May 15 01:06:18.110585 kubelet[2713]: I0515 01:06:18.110319 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/63cb137597026a41dff61060e9a87661-kubeconfig\") pod \"kube-scheduler-ci-4284-0-0-n-df1b790171.novalocal\" (UID: \"63cb137597026a41dff61060e9a87661\") " pod="kube-system/kube-scheduler-ci-4284-0-0-n-df1b790171.novalocal" May 15 01:06:18.110585 kubelet[2713]: I0515 01:06:18.110339 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d507d33b63dd2d0c9e82d6cecc8dc0f5-ca-certs\") pod \"kube-apiserver-ci-4284-0-0-n-df1b790171.novalocal\" (UID: \"d507d33b63dd2d0c9e82d6cecc8dc0f5\") " pod="kube-system/kube-apiserver-ci-4284-0-0-n-df1b790171.novalocal" May 15 01:06:18.110722 kubelet[2713]: I0515 01:06:18.110365 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d507d33b63dd2d0c9e82d6cecc8dc0f5-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4284-0-0-n-df1b790171.novalocal\" (UID: \"d507d33b63dd2d0c9e82d6cecc8dc0f5\") " pod="kube-system/kube-apiserver-ci-4284-0-0-n-df1b790171.novalocal" May 15 01:06:18.110722 kubelet[2713]: I0515 01:06:18.110386 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6eeb71f4bc42d156d6772ac0173c3d1b-k8s-certs\") pod \"kube-controller-manager-ci-4284-0-0-n-df1b790171.novalocal\" (UID: \"6eeb71f4bc42d156d6772ac0173c3d1b\") " pod="kube-system/kube-controller-manager-ci-4284-0-0-n-df1b790171.novalocal" May 15 01:06:18.365988 sudo[2745]: pam_unix(sudo:session): session closed for user root May 15 01:06:18.557598 kubelet[2713]: I0515 01:06:18.555609 2713 apiserver.go:52] "Watching apiserver" May 15 01:06:18.610637 kubelet[2713]: I0515 01:06:18.610443 2713 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 15 01:06:18.674181 kubelet[2713]: I0515 01:06:18.673667 2713 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4284-0-0-n-df1b790171.novalocal" May 15 01:06:18.701722 kubelet[2713]: W0515 01:06:18.701676 2713 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 15 01:06:18.703321 kubelet[2713]: E0515 01:06:18.702412 2713 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4284-0-0-n-df1b790171.novalocal\" already exists" 
pod="kube-system/kube-scheduler-ci-4284-0-0-n-df1b790171.novalocal" May 15 01:06:18.731320 kubelet[2713]: I0515 01:06:18.730943 2713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4284-0-0-n-df1b790171.novalocal" podStartSLOduration=1.730900949 podStartE2EDuration="1.730900949s" podCreationTimestamp="2025-05-15 01:06:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 01:06:18.728264566 +0000 UTC m=+1.282697546" watchObservedRunningTime="2025-05-15 01:06:18.730900949 +0000 UTC m=+1.285333900" May 15 01:06:18.778158 kubelet[2713]: I0515 01:06:18.776501 2713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4284-0-0-n-df1b790171.novalocal" podStartSLOduration=1.776426956 podStartE2EDuration="1.776426956s" podCreationTimestamp="2025-05-15 01:06:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 01:06:18.774551295 +0000 UTC m=+1.328984265" watchObservedRunningTime="2025-05-15 01:06:18.776426956 +0000 UTC m=+1.330859906" May 15 01:06:18.778158 kubelet[2713]: I0515 01:06:18.776662 2713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4284-0-0-n-df1b790171.novalocal" podStartSLOduration=3.776656289 podStartE2EDuration="3.776656289s" podCreationTimestamp="2025-05-15 01:06:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 01:06:18.758918815 +0000 UTC m=+1.313351765" watchObservedRunningTime="2025-05-15 01:06:18.776656289 +0000 UTC m=+1.331089239" May 15 01:06:20.615984 sudo[1754]: pam_unix(sudo:session): session closed for user root May 15 01:06:20.894972 sshd[1753]: Connection closed by 172.24.4.1 port 41148 May 15 01:06:20.901705 sshd-session[1750]: pam_unix(sshd:session): session closed for user core May 15 01:06:20.923804 systemd[1]: sshd@8-172.24.4.204:22-172.24.4.1:41148.service: Deactivated successfully. May 15 01:06:20.937556 systemd[1]: session-11.scope: Deactivated successfully. May 15 01:06:20.938258 systemd[1]: session-11.scope: Consumed 7.918s CPU time, 267.7M memory peak. May 15 01:06:20.946488 systemd-logind[1462]: Session 11 logged out. Waiting for processes to exit. May 15 01:06:20.954388 systemd-logind[1462]: Removed session 11. May 15 01:06:22.048579 kubelet[2713]: I0515 01:06:22.048498 2713 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 15 01:06:22.050821 containerd[1482]: time="2025-05-15T01:06:22.050705255Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 15 01:06:22.051930 kubelet[2713]: I0515 01:06:22.051088 2713 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 15 01:06:22.672713 systemd[1]: Created slice kubepods-besteffort-podaa0fd740_edea_4ff0_97a2_6c34d752f96c.slice - libcontainer container kubepods-besteffort-podaa0fd740_edea_4ff0_97a2_6c34d752f96c.slice. 
May 15 01:06:22.675493 kubelet[2713]: I0515 01:06:22.674615 2713 status_manager.go:890] "Failed to get status for pod" podUID="aa0fd740-edea-4ff0-97a2-6c34d752f96c" pod="kube-system/kube-proxy-8pq2l" err="pods \"kube-proxy-8pq2l\" is forbidden: User \"system:node:ci-4284-0-0-n-df1b790171.novalocal\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4284-0-0-n-df1b790171.novalocal' and this object" May 15 01:06:22.675493 kubelet[2713]: W0515 01:06:22.674812 2713 reflector.go:569] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4284-0-0-n-df1b790171.novalocal" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4284-0-0-n-df1b790171.novalocal' and this object May 15 01:06:22.675493 kubelet[2713]: E0515 01:06:22.675371 2713 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:ci-4284-0-0-n-df1b790171.novalocal\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4284-0-0-n-df1b790171.novalocal' and this object" logger="UnhandledError" May 15 01:06:22.675493 kubelet[2713]: W0515 01:06:22.675425 2713 reflector.go:569] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ci-4284-0-0-n-df1b790171.novalocal" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4284-0-0-n-df1b790171.novalocal' and this object May 15 01:06:22.675761 kubelet[2713]: E0515 01:06:22.675446 2713 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:ci-4284-0-0-n-df1b790171.novalocal\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4284-0-0-n-df1b790171.novalocal' and this object" logger="UnhandledError" May 15 01:06:22.693719 systemd[1]: Created slice kubepods-burstable-pode89a2e4d_62b4_4294_ace6_87ba5bd89634.slice - libcontainer container kubepods-burstable-pode89a2e4d_62b4_4294_ace6_87ba5bd89634.slice. 
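
The "forbidden ... no relationship found between node ... and this object" denials above come from the node authorizer: until kube-proxy-8pq2l is actually bound to this node, its kubelet may not read the kube-proxy and kube-root-ca.crt ConfigMaps. A client-go sketch that asks the apiserver the same question via a SelfSubjectAccessReview; run with the kubelet's credentials (the kubeconfig path is again an assumption) it should report allowed=false while these denials are being logged.

// node_authz_check.go - ask the apiserver whether the caller may read the
// kube-proxy ConfigMap, mirroring the denials in the entries above.
package main

import (
	"context"
	"fmt"

	authv1 "k8s.io/api/authorization/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubelet.conf")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	review := &authv1.SelfSubjectAccessReview{
		Spec: authv1.SelfSubjectAccessReviewSpec{
			ResourceAttributes: &authv1.ResourceAttributes{
				Namespace: "kube-system",
				Verb:      "get",
				Resource:  "configmaps",
				Name:      "kube-proxy",
			},
		},
	}
	result, err := cs.AuthorizationV1().SelfSubjectAccessReviews().Create(
		context.Background(), review, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("allowed=%v reason=%q\n", result.Status.Allowed, result.Status.Reason)
}
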
May 15 01:06:22.842499 kubelet[2713]: I0515 01:06:22.842364 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/aa0fd740-edea-4ff0-97a2-6c34d752f96c-kube-proxy\") pod \"kube-proxy-8pq2l\" (UID: \"aa0fd740-edea-4ff0-97a2-6c34d752f96c\") " pod="kube-system/kube-proxy-8pq2l" May 15 01:06:22.842833 kubelet[2713]: I0515 01:06:22.842654 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e89a2e4d-62b4-4294-ace6-87ba5bd89634-cni-path\") pod \"cilium-xhlcb\" (UID: \"e89a2e4d-62b4-4294-ace6-87ba5bd89634\") " pod="kube-system/cilium-xhlcb" May 15 01:06:22.843699 kubelet[2713]: I0515 01:06:22.842999 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e89a2e4d-62b4-4294-ace6-87ba5bd89634-host-proc-sys-kernel\") pod \"cilium-xhlcb\" (UID: \"e89a2e4d-62b4-4294-ace6-87ba5bd89634\") " pod="kube-system/cilium-xhlcb" May 15 01:06:22.843699 kubelet[2713]: I0515 01:06:22.843133 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbwx5\" (UniqueName: \"kubernetes.io/projected/aa0fd740-edea-4ff0-97a2-6c34d752f96c-kube-api-access-lbwx5\") pod \"kube-proxy-8pq2l\" (UID: \"aa0fd740-edea-4ff0-97a2-6c34d752f96c\") " pod="kube-system/kube-proxy-8pq2l" May 15 01:06:22.843699 kubelet[2713]: I0515 01:06:22.843171 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e89a2e4d-62b4-4294-ace6-87ba5bd89634-cilium-cgroup\") pod \"cilium-xhlcb\" (UID: \"e89a2e4d-62b4-4294-ace6-87ba5bd89634\") " pod="kube-system/cilium-xhlcb" May 15 01:06:22.843699 kubelet[2713]: I0515 01:06:22.843213 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e89a2e4d-62b4-4294-ace6-87ba5bd89634-lib-modules\") pod \"cilium-xhlcb\" (UID: \"e89a2e4d-62b4-4294-ace6-87ba5bd89634\") " pod="kube-system/cilium-xhlcb" May 15 01:06:22.843699 kubelet[2713]: I0515 01:06:22.843237 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e89a2e4d-62b4-4294-ace6-87ba5bd89634-cilium-run\") pod \"cilium-xhlcb\" (UID: \"e89a2e4d-62b4-4294-ace6-87ba5bd89634\") " pod="kube-system/cilium-xhlcb" May 15 01:06:22.843699 kubelet[2713]: I0515 01:06:22.843261 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e89a2e4d-62b4-4294-ace6-87ba5bd89634-hostproc\") pod \"cilium-xhlcb\" (UID: \"e89a2e4d-62b4-4294-ace6-87ba5bd89634\") " pod="kube-system/cilium-xhlcb" May 15 01:06:22.843983 kubelet[2713]: I0515 01:06:22.843301 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e89a2e4d-62b4-4294-ace6-87ba5bd89634-etc-cni-netd\") pod \"cilium-xhlcb\" (UID: \"e89a2e4d-62b4-4294-ace6-87ba5bd89634\") " pod="kube-system/cilium-xhlcb" May 15 01:06:22.843983 kubelet[2713]: I0515 01:06:22.843337 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" 
(UniqueName: \"kubernetes.io/configmap/e89a2e4d-62b4-4294-ace6-87ba5bd89634-cilium-config-path\") pod \"cilium-xhlcb\" (UID: \"e89a2e4d-62b4-4294-ace6-87ba5bd89634\") " pod="kube-system/cilium-xhlcb" May 15 01:06:22.843983 kubelet[2713]: I0515 01:06:22.843360 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/aa0fd740-edea-4ff0-97a2-6c34d752f96c-xtables-lock\") pod \"kube-proxy-8pq2l\" (UID: \"aa0fd740-edea-4ff0-97a2-6c34d752f96c\") " pod="kube-system/kube-proxy-8pq2l" May 15 01:06:22.843983 kubelet[2713]: I0515 01:06:22.843384 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f8wz6\" (UniqueName: \"kubernetes.io/projected/e89a2e4d-62b4-4294-ace6-87ba5bd89634-kube-api-access-f8wz6\") pod \"cilium-xhlcb\" (UID: \"e89a2e4d-62b4-4294-ace6-87ba5bd89634\") " pod="kube-system/cilium-xhlcb" May 15 01:06:22.843983 kubelet[2713]: I0515 01:06:22.843404 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e89a2e4d-62b4-4294-ace6-87ba5bd89634-host-proc-sys-net\") pod \"cilium-xhlcb\" (UID: \"e89a2e4d-62b4-4294-ace6-87ba5bd89634\") " pod="kube-system/cilium-xhlcb" May 15 01:06:22.844228 kubelet[2713]: I0515 01:06:22.843440 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/aa0fd740-edea-4ff0-97a2-6c34d752f96c-lib-modules\") pod \"kube-proxy-8pq2l\" (UID: \"aa0fd740-edea-4ff0-97a2-6c34d752f96c\") " pod="kube-system/kube-proxy-8pq2l" May 15 01:06:22.844228 kubelet[2713]: I0515 01:06:22.843460 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e89a2e4d-62b4-4294-ace6-87ba5bd89634-xtables-lock\") pod \"cilium-xhlcb\" (UID: \"e89a2e4d-62b4-4294-ace6-87ba5bd89634\") " pod="kube-system/cilium-xhlcb" May 15 01:06:22.844228 kubelet[2713]: I0515 01:06:22.843480 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e89a2e4d-62b4-4294-ace6-87ba5bd89634-clustermesh-secrets\") pod \"cilium-xhlcb\" (UID: \"e89a2e4d-62b4-4294-ace6-87ba5bd89634\") " pod="kube-system/cilium-xhlcb" May 15 01:06:22.844228 kubelet[2713]: I0515 01:06:22.843502 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e89a2e4d-62b4-4294-ace6-87ba5bd89634-hubble-tls\") pod \"cilium-xhlcb\" (UID: \"e89a2e4d-62b4-4294-ace6-87ba5bd89634\") " pod="kube-system/cilium-xhlcb" May 15 01:06:22.844228 kubelet[2713]: I0515 01:06:22.843523 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e89a2e4d-62b4-4294-ace6-87ba5bd89634-bpf-maps\") pod \"cilium-xhlcb\" (UID: \"e89a2e4d-62b4-4294-ace6-87ba5bd89634\") " pod="kube-system/cilium-xhlcb" May 15 01:06:23.040499 systemd[1]: Created slice kubepods-besteffort-pod2953288d_cd7c_45a0_b6af_08868a2e32ea.slice - libcontainer container kubepods-besteffort-pod2953288d_cd7c_45a0_b6af_08868a2e32ea.slice. 
May 15 01:06:23.051948 kubelet[2713]: I0515 01:06:23.051805 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vx74\" (UniqueName: \"kubernetes.io/projected/2953288d-cd7c-45a0-b6af-08868a2e32ea-kube-api-access-2vx74\") pod \"cilium-operator-6c4d7847fc-dbxbg\" (UID: \"2953288d-cd7c-45a0-b6af-08868a2e32ea\") " pod="kube-system/cilium-operator-6c4d7847fc-dbxbg" May 15 01:06:23.051948 kubelet[2713]: I0515 01:06:23.051952 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2953288d-cd7c-45a0-b6af-08868a2e32ea-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-dbxbg\" (UID: \"2953288d-cd7c-45a0-b6af-08868a2e32ea\") " pod="kube-system/cilium-operator-6c4d7847fc-dbxbg" May 15 01:06:23.660530 containerd[1482]: time="2025-05-15T01:06:23.656957270Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-dbxbg,Uid:2953288d-cd7c-45a0-b6af-08868a2e32ea,Namespace:kube-system,Attempt:0,}" May 15 01:06:23.885005 containerd[1482]: time="2025-05-15T01:06:23.884917882Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8pq2l,Uid:aa0fd740-edea-4ff0-97a2-6c34d752f96c,Namespace:kube-system,Attempt:0,}" May 15 01:06:23.961131 containerd[1482]: time="2025-05-15T01:06:23.960790371Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xhlcb,Uid:e89a2e4d-62b4-4294-ace6-87ba5bd89634,Namespace:kube-system,Attempt:0,}" May 15 01:06:24.175314 containerd[1482]: time="2025-05-15T01:06:24.174746474Z" level=info msg="connecting to shim db23e80c84fdde99996535a9633855bdce6d049d5c0a5ed8f3ceda9cf64d6bba" address="unix:///run/containerd/s/31991f08f22b75e570fec3531e3b65ca6a3c60ccf5bcf1a2f6ed96a2caa025c1" namespace=k8s.io protocol=ttrpc version=3 May 15 01:06:24.193051 containerd[1482]: time="2025-05-15T01:06:24.192507373Z" level=info msg="connecting to shim 6a4764f1f2f26d3176251a2c40a91c8f82bf9d67a95d3ce08f7d4fd5983b0bb4" address="unix:///run/containerd/s/7f60911ac76c08bde972c5cbcda698a3b882392619412bea47b896c6b4833cb5" namespace=k8s.io protocol=ttrpc version=3 May 15 01:06:24.204357 containerd[1482]: time="2025-05-15T01:06:24.204279875Z" level=info msg="connecting to shim d9390dfaf65a87619c84f0612a962b722996ca6161c029cbc2b41a0170349a1a" address="unix:///run/containerd/s/f00a749d69f883c360e7da6dcc537de662025a4852df8fb7f8765ccdfd7c673a" namespace=k8s.io protocol=ttrpc version=3 May 15 01:06:24.235694 systemd[1]: Started cri-containerd-db23e80c84fdde99996535a9633855bdce6d049d5c0a5ed8f3ceda9cf64d6bba.scope - libcontainer container db23e80c84fdde99996535a9633855bdce6d049d5c0a5ed8f3ceda9cf64d6bba. May 15 01:06:24.243581 systemd[1]: Started cri-containerd-6a4764f1f2f26d3176251a2c40a91c8f82bf9d67a95d3ce08f7d4fd5983b0bb4.scope - libcontainer container 6a4764f1f2f26d3176251a2c40a91c8f82bf9d67a95d3ce08f7d4fd5983b0bb4. May 15 01:06:24.262522 systemd[1]: Started cri-containerd-d9390dfaf65a87619c84f0612a962b722996ca6161c029cbc2b41a0170349a1a.scope - libcontainer container d9390dfaf65a87619c84f0612a962b722996ca6161c029cbc2b41a0170349a1a. 
May 15 01:06:24.319782 containerd[1482]: time="2025-05-15T01:06:24.319465435Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8pq2l,Uid:aa0fd740-edea-4ff0-97a2-6c34d752f96c,Namespace:kube-system,Attempt:0,} returns sandbox id \"6a4764f1f2f26d3176251a2c40a91c8f82bf9d67a95d3ce08f7d4fd5983b0bb4\"" May 15 01:06:24.327969 containerd[1482]: time="2025-05-15T01:06:24.327755447Z" level=info msg="CreateContainer within sandbox \"6a4764f1f2f26d3176251a2c40a91c8f82bf9d67a95d3ce08f7d4fd5983b0bb4\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 15 01:06:24.344580 containerd[1482]: time="2025-05-15T01:06:24.342161478Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xhlcb,Uid:e89a2e4d-62b4-4294-ace6-87ba5bd89634,Namespace:kube-system,Attempt:0,} returns sandbox id \"d9390dfaf65a87619c84f0612a962b722996ca6161c029cbc2b41a0170349a1a\"" May 15 01:06:24.360028 containerd[1482]: time="2025-05-15T01:06:24.359953989Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" May 15 01:06:24.399829 containerd[1482]: time="2025-05-15T01:06:24.399754573Z" level=info msg="Container 7afc88b2bea1334f2601ee8199e65097bbf2515f409ab736ae0857e3dfcd8507: CDI devices from CRI Config.CDIDevices: []" May 15 01:06:24.404438 containerd[1482]: time="2025-05-15T01:06:24.404371104Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-dbxbg,Uid:2953288d-cd7c-45a0-b6af-08868a2e32ea,Namespace:kube-system,Attempt:0,} returns sandbox id \"db23e80c84fdde99996535a9633855bdce6d049d5c0a5ed8f3ceda9cf64d6bba\"" May 15 01:06:24.485470 containerd[1482]: time="2025-05-15T01:06:24.485297603Z" level=info msg="CreateContainer within sandbox \"6a4764f1f2f26d3176251a2c40a91c8f82bf9d67a95d3ce08f7d4fd5983b0bb4\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"7afc88b2bea1334f2601ee8199e65097bbf2515f409ab736ae0857e3dfcd8507\"" May 15 01:06:24.491439 containerd[1482]: time="2025-05-15T01:06:24.489861328Z" level=info msg="StartContainer for \"7afc88b2bea1334f2601ee8199e65097bbf2515f409ab736ae0857e3dfcd8507\"" May 15 01:06:24.495587 containerd[1482]: time="2025-05-15T01:06:24.495518224Z" level=info msg="connecting to shim 7afc88b2bea1334f2601ee8199e65097bbf2515f409ab736ae0857e3dfcd8507" address="unix:///run/containerd/s/7f60911ac76c08bde972c5cbcda698a3b882392619412bea47b896c6b4833cb5" protocol=ttrpc version=3 May 15 01:06:24.562761 systemd[1]: Started cri-containerd-7afc88b2bea1334f2601ee8199e65097bbf2515f409ab736ae0857e3dfcd8507.scope - libcontainer container 7afc88b2bea1334f2601ee8199e65097bbf2515f409ab736ae0857e3dfcd8507. 
May 15 01:06:24.643532 containerd[1482]: time="2025-05-15T01:06:24.643470117Z" level=info msg="StartContainer for \"7afc88b2bea1334f2601ee8199e65097bbf2515f409ab736ae0857e3dfcd8507\" returns successfully" May 15 01:06:24.744793 kubelet[2713]: I0515 01:06:24.744627 2713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-8pq2l" podStartSLOduration=2.7446056629999998 podStartE2EDuration="2.744605663s" podCreationTimestamp="2025-05-15 01:06:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 01:06:24.744255495 +0000 UTC m=+7.298688445" watchObservedRunningTime="2025-05-15 01:06:24.744605663 +0000 UTC m=+7.299038623" May 15 01:06:29.907779 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount334616556.mount: Deactivated successfully. May 15 01:06:32.842811 containerd[1482]: time="2025-05-15T01:06:32.842493978Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 01:06:32.846651 containerd[1482]: time="2025-05-15T01:06:32.846466789Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" May 15 01:06:32.848008 containerd[1482]: time="2025-05-15T01:06:32.847141270Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 01:06:32.855501 containerd[1482]: time="2025-05-15T01:06:32.855167672Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 8.494818435s" May 15 01:06:32.855764 containerd[1482]: time="2025-05-15T01:06:32.855393297Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" May 15 01:06:32.860376 containerd[1482]: time="2025-05-15T01:06:32.860215074Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" May 15 01:06:32.864345 containerd[1482]: time="2025-05-15T01:06:32.863943523Z" level=info msg="CreateContainer within sandbox \"d9390dfaf65a87619c84f0612a962b722996ca6161c029cbc2b41a0170349a1a\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 15 01:06:32.900362 containerd[1482]: time="2025-05-15T01:06:32.895690232Z" level=info msg="Container ac11d2fa667b4975bef6f2275750c13f812e5d1eb0c8a492b259925ac26fb1f8: CDI devices from CRI Config.CDIDevices: []" May 15 01:06:32.908763 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount696349123.mount: Deactivated successfully. 
May 15 01:06:32.931913 containerd[1482]: time="2025-05-15T01:06:32.931776595Z" level=info msg="CreateContainer within sandbox \"d9390dfaf65a87619c84f0612a962b722996ca6161c029cbc2b41a0170349a1a\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"ac11d2fa667b4975bef6f2275750c13f812e5d1eb0c8a492b259925ac26fb1f8\"" May 15 01:06:32.933534 containerd[1482]: time="2025-05-15T01:06:32.933470719Z" level=info msg="StartContainer for \"ac11d2fa667b4975bef6f2275750c13f812e5d1eb0c8a492b259925ac26fb1f8\"" May 15 01:06:32.939615 containerd[1482]: time="2025-05-15T01:06:32.938980894Z" level=info msg="connecting to shim ac11d2fa667b4975bef6f2275750c13f812e5d1eb0c8a492b259925ac26fb1f8" address="unix:///run/containerd/s/f00a749d69f883c360e7da6dcc537de662025a4852df8fb7f8765ccdfd7c673a" protocol=ttrpc version=3 May 15 01:06:32.991621 systemd[1]: Started cri-containerd-ac11d2fa667b4975bef6f2275750c13f812e5d1eb0c8a492b259925ac26fb1f8.scope - libcontainer container ac11d2fa667b4975bef6f2275750c13f812e5d1eb0c8a492b259925ac26fb1f8. May 15 01:06:33.037531 containerd[1482]: time="2025-05-15T01:06:33.037485759Z" level=info msg="StartContainer for \"ac11d2fa667b4975bef6f2275750c13f812e5d1eb0c8a492b259925ac26fb1f8\" returns successfully" May 15 01:06:33.049406 systemd[1]: cri-containerd-ac11d2fa667b4975bef6f2275750c13f812e5d1eb0c8a492b259925ac26fb1f8.scope: Deactivated successfully. May 15 01:06:33.053192 containerd[1482]: time="2025-05-15T01:06:33.053107253Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ac11d2fa667b4975bef6f2275750c13f812e5d1eb0c8a492b259925ac26fb1f8\" id:\"ac11d2fa667b4975bef6f2275750c13f812e5d1eb0c8a492b259925ac26fb1f8\" pid:3123 exited_at:{seconds:1747271193 nanos:51819274}" May 15 01:06:33.053311 containerd[1482]: time="2025-05-15T01:06:33.053193944Z" level=info msg="received exit event container_id:\"ac11d2fa667b4975bef6f2275750c13f812e5d1eb0c8a492b259925ac26fb1f8\" id:\"ac11d2fa667b4975bef6f2275750c13f812e5d1eb0c8a492b259925ac26fb1f8\" pid:3123 exited_at:{seconds:1747271193 nanos:51819274}" May 15 01:06:33.080039 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ac11d2fa667b4975bef6f2275750c13f812e5d1eb0c8a492b259925ac26fb1f8-rootfs.mount: Deactivated successfully. May 15 01:06:34.746688 containerd[1482]: time="2025-05-15T01:06:34.746551148Z" level=info msg="CreateContainer within sandbox \"d9390dfaf65a87619c84f0612a962b722996ca6161c029cbc2b41a0170349a1a\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 15 01:06:34.891753 containerd[1482]: time="2025-05-15T01:06:34.891663774Z" level=info msg="Container fb3acaa8a988841c1befd658615d3ce6eeab18d5cfc3d7a6fd6a65b1b3971190: CDI devices from CRI Config.CDIDevices: []" May 15 01:06:34.907107 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3122150056.mount: Deactivated successfully. 
May 15 01:06:35.016839 containerd[1482]: time="2025-05-15T01:06:35.016085292Z" level=info msg="CreateContainer within sandbox \"d9390dfaf65a87619c84f0612a962b722996ca6161c029cbc2b41a0170349a1a\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"fb3acaa8a988841c1befd658615d3ce6eeab18d5cfc3d7a6fd6a65b1b3971190\"" May 15 01:06:35.019451 containerd[1482]: time="2025-05-15T01:06:35.017880747Z" level=info msg="StartContainer for \"fb3acaa8a988841c1befd658615d3ce6eeab18d5cfc3d7a6fd6a65b1b3971190\"" May 15 01:06:35.021024 containerd[1482]: time="2025-05-15T01:06:35.020945721Z" level=info msg="connecting to shim fb3acaa8a988841c1befd658615d3ce6eeab18d5cfc3d7a6fd6a65b1b3971190" address="unix:///run/containerd/s/f00a749d69f883c360e7da6dcc537de662025a4852df8fb7f8765ccdfd7c673a" protocol=ttrpc version=3 May 15 01:06:35.067679 systemd[1]: Started cri-containerd-fb3acaa8a988841c1befd658615d3ce6eeab18d5cfc3d7a6fd6a65b1b3971190.scope - libcontainer container fb3acaa8a988841c1befd658615d3ce6eeab18d5cfc3d7a6fd6a65b1b3971190. May 15 01:06:35.151786 containerd[1482]: time="2025-05-15T01:06:35.151693180Z" level=info msg="StartContainer for \"fb3acaa8a988841c1befd658615d3ce6eeab18d5cfc3d7a6fd6a65b1b3971190\" returns successfully" May 15 01:06:35.177832 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 15 01:06:35.178220 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 15 01:06:35.179649 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... May 15 01:06:35.182824 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 15 01:06:35.187655 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 15 01:06:35.188803 systemd[1]: cri-containerd-fb3acaa8a988841c1befd658615d3ce6eeab18d5cfc3d7a6fd6a65b1b3971190.scope: Deactivated successfully. May 15 01:06:35.194292 containerd[1482]: time="2025-05-15T01:06:35.192601806Z" level=info msg="received exit event container_id:\"fb3acaa8a988841c1befd658615d3ce6eeab18d5cfc3d7a6fd6a65b1b3971190\" id:\"fb3acaa8a988841c1befd658615d3ce6eeab18d5cfc3d7a6fd6a65b1b3971190\" pid:3168 exited_at:{seconds:1747271195 nanos:190618651}" May 15 01:06:35.194292 containerd[1482]: time="2025-05-15T01:06:35.193137120Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fb3acaa8a988841c1befd658615d3ce6eeab18d5cfc3d7a6fd6a65b1b3971190\" id:\"fb3acaa8a988841c1befd658615d3ce6eeab18d5cfc3d7a6fd6a65b1b3971190\" pid:3168 exited_at:{seconds:1747271195 nanos:190618651}" May 15 01:06:35.227680 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
May 15 01:06:35.751066 containerd[1482]: time="2025-05-15T01:06:35.750543580Z" level=info msg="CreateContainer within sandbox \"d9390dfaf65a87619c84f0612a962b722996ca6161c029cbc2b41a0170349a1a\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 15 01:06:35.785954 containerd[1482]: time="2025-05-15T01:06:35.785830889Z" level=info msg="Container 13bd8c81138bdb9b204114b7d6d63ecd383a0b99d8a84cb05c135b2bc88d22b1: CDI devices from CRI Config.CDIDevices: []" May 15 01:06:35.814863 containerd[1482]: time="2025-05-15T01:06:35.814752215Z" level=info msg="CreateContainer within sandbox \"d9390dfaf65a87619c84f0612a962b722996ca6161c029cbc2b41a0170349a1a\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"13bd8c81138bdb9b204114b7d6d63ecd383a0b99d8a84cb05c135b2bc88d22b1\"" May 15 01:06:35.816838 containerd[1482]: time="2025-05-15T01:06:35.816348748Z" level=info msg="StartContainer for \"13bd8c81138bdb9b204114b7d6d63ecd383a0b99d8a84cb05c135b2bc88d22b1\"" May 15 01:06:35.823508 containerd[1482]: time="2025-05-15T01:06:35.823415670Z" level=info msg="connecting to shim 13bd8c81138bdb9b204114b7d6d63ecd383a0b99d8a84cb05c135b2bc88d22b1" address="unix:///run/containerd/s/f00a749d69f883c360e7da6dcc537de662025a4852df8fb7f8765ccdfd7c673a" protocol=ttrpc version=3 May 15 01:06:35.862872 systemd[1]: Started cri-containerd-13bd8c81138bdb9b204114b7d6d63ecd383a0b99d8a84cb05c135b2bc88d22b1.scope - libcontainer container 13bd8c81138bdb9b204114b7d6d63ecd383a0b99d8a84cb05c135b2bc88d22b1. May 15 01:06:35.896257 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fb3acaa8a988841c1befd658615d3ce6eeab18d5cfc3d7a6fd6a65b1b3971190-rootfs.mount: Deactivated successfully. May 15 01:06:35.922619 systemd[1]: cri-containerd-13bd8c81138bdb9b204114b7d6d63ecd383a0b99d8a84cb05c135b2bc88d22b1.scope: Deactivated successfully. May 15 01:06:35.927864 containerd[1482]: time="2025-05-15T01:06:35.927691124Z" level=info msg="received exit event container_id:\"13bd8c81138bdb9b204114b7d6d63ecd383a0b99d8a84cb05c135b2bc88d22b1\" id:\"13bd8c81138bdb9b204114b7d6d63ecd383a0b99d8a84cb05c135b2bc88d22b1\" pid:3215 exited_at:{seconds:1747271195 nanos:923934799}" May 15 01:06:35.929947 containerd[1482]: time="2025-05-15T01:06:35.929727362Z" level=info msg="TaskExit event in podsandbox handler container_id:\"13bd8c81138bdb9b204114b7d6d63ecd383a0b99d8a84cb05c135b2bc88d22b1\" id:\"13bd8c81138bdb9b204114b7d6d63ecd383a0b99d8a84cb05c135b2bc88d22b1\" pid:3215 exited_at:{seconds:1747271195 nanos:923934799}" May 15 01:06:35.942639 containerd[1482]: time="2025-05-15T01:06:35.942305766Z" level=info msg="StartContainer for \"13bd8c81138bdb9b204114b7d6d63ecd383a0b99d8a84cb05c135b2bc88d22b1\" returns successfully" May 15 01:06:35.968715 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-13bd8c81138bdb9b204114b7d6d63ecd383a0b99d8a84cb05c135b2bc88d22b1-rootfs.mount: Deactivated successfully. May 15 01:06:36.165127 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4133320401.mount: Deactivated successfully. 
May 15 01:06:36.762304 containerd[1482]: time="2025-05-15T01:06:36.762124319Z" level=info msg="CreateContainer within sandbox \"d9390dfaf65a87619c84f0612a962b722996ca6161c029cbc2b41a0170349a1a\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 15 01:06:36.792558 containerd[1482]: time="2025-05-15T01:06:36.792149345Z" level=info msg="Container 386cb6ac14c0550aa9591ea5babd5cde59e9487e8e223b8984b4495db147a5e7: CDI devices from CRI Config.CDIDevices: []" May 15 01:06:36.809036 containerd[1482]: time="2025-05-15T01:06:36.808890498Z" level=info msg="CreateContainer within sandbox \"d9390dfaf65a87619c84f0612a962b722996ca6161c029cbc2b41a0170349a1a\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"386cb6ac14c0550aa9591ea5babd5cde59e9487e8e223b8984b4495db147a5e7\"" May 15 01:06:36.811165 containerd[1482]: time="2025-05-15T01:06:36.810976992Z" level=info msg="StartContainer for \"386cb6ac14c0550aa9591ea5babd5cde59e9487e8e223b8984b4495db147a5e7\"" May 15 01:06:36.814738 containerd[1482]: time="2025-05-15T01:06:36.814409084Z" level=info msg="connecting to shim 386cb6ac14c0550aa9591ea5babd5cde59e9487e8e223b8984b4495db147a5e7" address="unix:///run/containerd/s/f00a749d69f883c360e7da6dcc537de662025a4852df8fb7f8765ccdfd7c673a" protocol=ttrpc version=3 May 15 01:06:36.848577 systemd[1]: Started cri-containerd-386cb6ac14c0550aa9591ea5babd5cde59e9487e8e223b8984b4495db147a5e7.scope - libcontainer container 386cb6ac14c0550aa9591ea5babd5cde59e9487e8e223b8984b4495db147a5e7. May 15 01:06:36.894394 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1543320071.mount: Deactivated successfully. May 15 01:06:36.899759 systemd[1]: cri-containerd-386cb6ac14c0550aa9591ea5babd5cde59e9487e8e223b8984b4495db147a5e7.scope: Deactivated successfully. May 15 01:06:36.909921 containerd[1482]: time="2025-05-15T01:06:36.909863301Z" level=info msg="received exit event container_id:\"386cb6ac14c0550aa9591ea5babd5cde59e9487e8e223b8984b4495db147a5e7\" id:\"386cb6ac14c0550aa9591ea5babd5cde59e9487e8e223b8984b4495db147a5e7\" pid:3269 exited_at:{seconds:1747271196 nanos:901354322}" May 15 01:06:36.910180 containerd[1482]: time="2025-05-15T01:06:36.909893390Z" level=info msg="TaskExit event in podsandbox handler container_id:\"386cb6ac14c0550aa9591ea5babd5cde59e9487e8e223b8984b4495db147a5e7\" id:\"386cb6ac14c0550aa9591ea5babd5cde59e9487e8e223b8984b4495db147a5e7\" pid:3269 exited_at:{seconds:1747271196 nanos:901354322}" May 15 01:06:36.911103 containerd[1482]: time="2025-05-15T01:06:36.910929829Z" level=info msg="StartContainer for \"386cb6ac14c0550aa9591ea5babd5cde59e9487e8e223b8984b4495db147a5e7\" returns successfully" May 15 01:06:36.946561 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-386cb6ac14c0550aa9591ea5babd5cde59e9487e8e223b8984b4495db147a5e7-rootfs.mount: Deactivated successfully. 
May 15 01:06:37.385154 containerd[1482]: time="2025-05-15T01:06:37.385064747Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 01:06:37.386529 containerd[1482]: time="2025-05-15T01:06:37.386473257Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" May 15 01:06:37.389293 containerd[1482]: time="2025-05-15T01:06:37.387663857Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 01:06:37.389390 containerd[1482]: time="2025-05-15T01:06:37.389260856Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 4.5289382s" May 15 01:06:37.389488 containerd[1482]: time="2025-05-15T01:06:37.389467894Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" May 15 01:06:37.393499 containerd[1482]: time="2025-05-15T01:06:37.393474219Z" level=info msg="CreateContainer within sandbox \"db23e80c84fdde99996535a9633855bdce6d049d5c0a5ed8f3ceda9cf64d6bba\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" May 15 01:06:37.411441 containerd[1482]: time="2025-05-15T01:06:37.411383476Z" level=info msg="Container 8dbbabd422b6467ef09814a72f76daea6719b7b404962b37c351e344c064f5f3: CDI devices from CRI Config.CDIDevices: []" May 15 01:06:37.420184 containerd[1482]: time="2025-05-15T01:06:37.420140570Z" level=info msg="CreateContainer within sandbox \"db23e80c84fdde99996535a9633855bdce6d049d5c0a5ed8f3ceda9cf64d6bba\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"8dbbabd422b6467ef09814a72f76daea6719b7b404962b37c351e344c064f5f3\"" May 15 01:06:37.421168 containerd[1482]: time="2025-05-15T01:06:37.421062423Z" level=info msg="StartContainer for \"8dbbabd422b6467ef09814a72f76daea6719b7b404962b37c351e344c064f5f3\"" May 15 01:06:37.423413 containerd[1482]: time="2025-05-15T01:06:37.423359949Z" level=info msg="connecting to shim 8dbbabd422b6467ef09814a72f76daea6719b7b404962b37c351e344c064f5f3" address="unix:///run/containerd/s/31991f08f22b75e570fec3531e3b65ca6a3c60ccf5bcf1a2f6ed96a2caa025c1" protocol=ttrpc version=3 May 15 01:06:37.449362 systemd[1]: Started cri-containerd-8dbbabd422b6467ef09814a72f76daea6719b7b404962b37c351e344c064f5f3.scope - libcontainer container 8dbbabd422b6467ef09814a72f76daea6719b7b404962b37c351e344c064f5f3. 
May 15 01:06:37.491838 containerd[1482]: time="2025-05-15T01:06:37.491517514Z" level=info msg="StartContainer for \"8dbbabd422b6467ef09814a72f76daea6719b7b404962b37c351e344c064f5f3\" returns successfully" May 15 01:06:37.778357 containerd[1482]: time="2025-05-15T01:06:37.778295618Z" level=info msg="CreateContainer within sandbox \"d9390dfaf65a87619c84f0612a962b722996ca6161c029cbc2b41a0170349a1a\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 15 01:06:37.797494 containerd[1482]: time="2025-05-15T01:06:37.797428850Z" level=info msg="Container fbac1840d41a05b579ee6a03807f157b2065b6453318e1ad5fb017227022277d: CDI devices from CRI Config.CDIDevices: []" May 15 01:06:37.834393 containerd[1482]: time="2025-05-15T01:06:37.832693504Z" level=info msg="CreateContainer within sandbox \"d9390dfaf65a87619c84f0612a962b722996ca6161c029cbc2b41a0170349a1a\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"fbac1840d41a05b579ee6a03807f157b2065b6453318e1ad5fb017227022277d\"" May 15 01:06:37.837238 containerd[1482]: time="2025-05-15T01:06:37.837185345Z" level=info msg="StartContainer for \"fbac1840d41a05b579ee6a03807f157b2065b6453318e1ad5fb017227022277d\"" May 15 01:06:37.839505 containerd[1482]: time="2025-05-15T01:06:37.839043297Z" level=info msg="connecting to shim fbac1840d41a05b579ee6a03807f157b2065b6453318e1ad5fb017227022277d" address="unix:///run/containerd/s/f00a749d69f883c360e7da6dcc537de662025a4852df8fb7f8765ccdfd7c673a" protocol=ttrpc version=3 May 15 01:06:37.894486 systemd[1]: Started cri-containerd-fbac1840d41a05b579ee6a03807f157b2065b6453318e1ad5fb017227022277d.scope - libcontainer container fbac1840d41a05b579ee6a03807f157b2065b6453318e1ad5fb017227022277d. May 15 01:06:37.922105 kubelet[2713]: I0515 01:06:37.921954 2713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-dbxbg" podStartSLOduration=1.9394670189999998 podStartE2EDuration="14.921834038s" podCreationTimestamp="2025-05-15 01:06:23 +0000 UTC" firstStartedPulling="2025-05-15 01:06:24.408598158 +0000 UTC m=+6.963031118" lastFinishedPulling="2025-05-15 01:06:37.390965187 +0000 UTC m=+19.945398137" observedRunningTime="2025-05-15 01:06:37.849629528 +0000 UTC m=+20.404062498" watchObservedRunningTime="2025-05-15 01:06:37.921834038 +0000 UTC m=+20.476266988" May 15 01:06:38.000565 containerd[1482]: time="2025-05-15T01:06:38.000175633Z" level=info msg="StartContainer for \"fbac1840d41a05b579ee6a03807f157b2065b6453318e1ad5fb017227022277d\" returns successfully" May 15 01:06:38.143470 containerd[1482]: time="2025-05-15T01:06:38.142249612Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fbac1840d41a05b579ee6a03807f157b2065b6453318e1ad5fb017227022277d\" id:\"5039ec74ca901cb2b025b39bfc765f14d8f10aad47facb04569e78f7d4bbbc09\" pid:3375 exited_at:{seconds:1747271198 nanos:141814769}" May 15 01:06:38.203165 kubelet[2713]: I0515 01:06:38.203096 2713 kubelet_node_status.go:502] "Fast updating node status as it just became ready" May 15 01:06:38.465496 kubelet[2713]: I0515 01:06:38.465374 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgnmj\" (UniqueName: \"kubernetes.io/projected/dc378b4a-4282-4668-9956-8bb6ce084336-kube-api-access-xgnmj\") pod \"coredns-668d6bf9bc-5c4d7\" (UID: \"dc378b4a-4282-4668-9956-8bb6ce084336\") " pod="kube-system/coredns-668d6bf9bc-5c4d7" May 15 01:06:38.465496 kubelet[2713]: I0515 01:06:38.465424 2713 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/50012bfb-5ea1-42af-a450-01e9d7fc2ff8-config-volume\") pod \"coredns-668d6bf9bc-8sxsr\" (UID: \"50012bfb-5ea1-42af-a450-01e9d7fc2ff8\") " pod="kube-system/coredns-668d6bf9bc-8sxsr" May 15 01:06:38.465496 kubelet[2713]: I0515 01:06:38.465452 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-whvhw\" (UniqueName: \"kubernetes.io/projected/50012bfb-5ea1-42af-a450-01e9d7fc2ff8-kube-api-access-whvhw\") pod \"coredns-668d6bf9bc-8sxsr\" (UID: \"50012bfb-5ea1-42af-a450-01e9d7fc2ff8\") " pod="kube-system/coredns-668d6bf9bc-8sxsr" May 15 01:06:38.465496 kubelet[2713]: I0515 01:06:38.465473 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dc378b4a-4282-4668-9956-8bb6ce084336-config-volume\") pod \"coredns-668d6bf9bc-5c4d7\" (UID: \"dc378b4a-4282-4668-9956-8bb6ce084336\") " pod="kube-system/coredns-668d6bf9bc-5c4d7" May 15 01:06:38.473470 systemd[1]: Created slice kubepods-burstable-pod50012bfb_5ea1_42af_a450_01e9d7fc2ff8.slice - libcontainer container kubepods-burstable-pod50012bfb_5ea1_42af_a450_01e9d7fc2ff8.slice. May 15 01:06:38.483065 systemd[1]: Created slice kubepods-burstable-poddc378b4a_4282_4668_9956_8bb6ce084336.slice - libcontainer container kubepods-burstable-poddc378b4a_4282_4668_9956_8bb6ce084336.slice. May 15 01:06:38.780006 containerd[1482]: time="2025-05-15T01:06:38.779771839Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-8sxsr,Uid:50012bfb-5ea1-42af-a450-01e9d7fc2ff8,Namespace:kube-system,Attempt:0,}" May 15 01:06:38.787352 containerd[1482]: time="2025-05-15T01:06:38.786975121Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-5c4d7,Uid:dc378b4a-4282-4668-9956-8bb6ce084336,Namespace:kube-system,Attempt:0,}" May 15 01:06:38.862105 kubelet[2713]: I0515 01:06:38.861155 2713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-xhlcb" podStartSLOduration=8.35753206 podStartE2EDuration="16.861111413s" podCreationTimestamp="2025-05-15 01:06:22 +0000 UTC" firstStartedPulling="2025-05-15 01:06:24.355814299 +0000 UTC m=+6.910247249" lastFinishedPulling="2025-05-15 01:06:32.859393572 +0000 UTC m=+15.413826602" observedRunningTime="2025-05-15 01:06:38.859323541 +0000 UTC m=+21.413756511" watchObservedRunningTime="2025-05-15 01:06:38.861111413 +0000 UTC m=+21.415544363" May 15 01:06:41.385091 systemd-networkd[1381]: cilium_host: Link UP May 15 01:06:41.385356 systemd-networkd[1381]: cilium_net: Link UP May 15 01:06:41.385693 systemd-networkd[1381]: cilium_net: Gained carrier May 15 01:06:41.385973 systemd-networkd[1381]: cilium_host: Gained carrier May 15 01:06:41.499034 systemd-networkd[1381]: cilium_vxlan: Link UP May 15 01:06:41.499045 systemd-networkd[1381]: cilium_vxlan: Gained carrier May 15 01:06:41.541453 systemd-networkd[1381]: cilium_net: Gained IPv6LL May 15 01:06:41.821529 systemd-networkd[1381]: cilium_host: Gained IPv6LL May 15 01:06:41.843397 kernel: NET: Registered PF_ALG protocol family May 15 01:06:42.749772 systemd-networkd[1381]: lxc_health: Link UP May 15 01:06:42.758889 systemd-networkd[1381]: lxc_health: Gained carrier May 15 01:06:43.317575 systemd-networkd[1381]: cilium_vxlan: Gained IPv6LL May 15 01:06:43.351390 systemd-networkd[1381]: lxc88c5b5d81861: Link UP May 15 
01:06:43.358780 kernel: eth0: renamed from tmpff101 May 15 01:06:43.365839 systemd-networkd[1381]: lxc88c5b5d81861: Gained carrier May 15 01:06:43.422966 kernel: eth0: renamed from tmpa2130 May 15 01:06:43.426244 systemd-networkd[1381]: lxcf02f57165c36: Link UP May 15 01:06:43.427536 systemd-networkd[1381]: lxcf02f57165c36: Gained carrier May 15 01:06:44.661861 systemd-networkd[1381]: lxc88c5b5d81861: Gained IPv6LL May 15 01:06:44.789732 systemd-networkd[1381]: lxc_health: Gained IPv6LL May 15 01:06:44.853718 systemd-networkd[1381]: lxcf02f57165c36: Gained IPv6LL May 15 01:06:46.614677 kubelet[2713]: I0515 01:06:46.612905 2713 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 15 01:06:47.931684 containerd[1482]: time="2025-05-15T01:06:47.931474883Z" level=info msg="connecting to shim ff10118cd200e273865facf0d3fe1306d5cc46466cff8134bdaac7584e6b4983" address="unix:///run/containerd/s/cc91f10545dba810369a29f663d7481c88b94399e870a7ca5553b41da0e90871" namespace=k8s.io protocol=ttrpc version=3 May 15 01:06:47.949378 containerd[1482]: time="2025-05-15T01:06:47.946978548Z" level=info msg="connecting to shim a21300f8fc7d1aa3c55c7fdaec067b1c8902ac411233f151032601af91382ae9" address="unix:///run/containerd/s/c130739492d28fb3d494efb5da8f5f4490a2c73376502adaef55d3b7dcc1fec1" namespace=k8s.io protocol=ttrpc version=3 May 15 01:06:47.987585 systemd[1]: Started cri-containerd-ff10118cd200e273865facf0d3fe1306d5cc46466cff8134bdaac7584e6b4983.scope - libcontainer container ff10118cd200e273865facf0d3fe1306d5cc46466cff8134bdaac7584e6b4983. May 15 01:06:48.019471 systemd[1]: Started cri-containerd-a21300f8fc7d1aa3c55c7fdaec067b1c8902ac411233f151032601af91382ae9.scope - libcontainer container a21300f8fc7d1aa3c55c7fdaec067b1c8902ac411233f151032601af91382ae9. 
May 15 01:06:48.081562 containerd[1482]: time="2025-05-15T01:06:48.081438544Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-5c4d7,Uid:dc378b4a-4282-4668-9956-8bb6ce084336,Namespace:kube-system,Attempt:0,} returns sandbox id \"ff10118cd200e273865facf0d3fe1306d5cc46466cff8134bdaac7584e6b4983\"" May 15 01:06:48.089940 containerd[1482]: time="2025-05-15T01:06:48.089878817Z" level=info msg="CreateContainer within sandbox \"ff10118cd200e273865facf0d3fe1306d5cc46466cff8134bdaac7584e6b4983\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 15 01:06:48.110475 containerd[1482]: time="2025-05-15T01:06:48.110418278Z" level=info msg="Container 866f6d8f56f2669314af008af413fbbe123bcb14ee85bdb62520b37c31509a6f: CDI devices from CRI Config.CDIDevices: []" May 15 01:06:48.126780 containerd[1482]: time="2025-05-15T01:06:48.126722707Z" level=info msg="CreateContainer within sandbox \"ff10118cd200e273865facf0d3fe1306d5cc46466cff8134bdaac7584e6b4983\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"866f6d8f56f2669314af008af413fbbe123bcb14ee85bdb62520b37c31509a6f\"" May 15 01:06:48.130497 containerd[1482]: time="2025-05-15T01:06:48.129436383Z" level=info msg="StartContainer for \"866f6d8f56f2669314af008af413fbbe123bcb14ee85bdb62520b37c31509a6f\"" May 15 01:06:48.130497 containerd[1482]: time="2025-05-15T01:06:48.130426038Z" level=info msg="connecting to shim 866f6d8f56f2669314af008af413fbbe123bcb14ee85bdb62520b37c31509a6f" address="unix:///run/containerd/s/cc91f10545dba810369a29f663d7481c88b94399e870a7ca5553b41da0e90871" protocol=ttrpc version=3 May 15 01:06:48.138062 containerd[1482]: time="2025-05-15T01:06:48.137996981Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-8sxsr,Uid:50012bfb-5ea1-42af-a450-01e9d7fc2ff8,Namespace:kube-system,Attempt:0,} returns sandbox id \"a21300f8fc7d1aa3c55c7fdaec067b1c8902ac411233f151032601af91382ae9\"" May 15 01:06:48.142540 containerd[1482]: time="2025-05-15T01:06:48.142489436Z" level=info msg="CreateContainer within sandbox \"a21300f8fc7d1aa3c55c7fdaec067b1c8902ac411233f151032601af91382ae9\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 15 01:06:48.153468 systemd[1]: Started cri-containerd-866f6d8f56f2669314af008af413fbbe123bcb14ee85bdb62520b37c31509a6f.scope - libcontainer container 866f6d8f56f2669314af008af413fbbe123bcb14ee85bdb62520b37c31509a6f. 
May 15 01:06:48.162132 containerd[1482]: time="2025-05-15T01:06:48.161966398Z" level=info msg="Container c5fca3c62b640f706a7a65da0a1a5abbef669a93b292a4cbfa8b509b79f9b07c: CDI devices from CRI Config.CDIDevices: []" May 15 01:06:48.173710 containerd[1482]: time="2025-05-15T01:06:48.173566671Z" level=info msg="CreateContainer within sandbox \"a21300f8fc7d1aa3c55c7fdaec067b1c8902ac411233f151032601af91382ae9\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c5fca3c62b640f706a7a65da0a1a5abbef669a93b292a4cbfa8b509b79f9b07c\"" May 15 01:06:48.175366 containerd[1482]: time="2025-05-15T01:06:48.175030623Z" level=info msg="StartContainer for \"c5fca3c62b640f706a7a65da0a1a5abbef669a93b292a4cbfa8b509b79f9b07c\"" May 15 01:06:48.176702 containerd[1482]: time="2025-05-15T01:06:48.176677032Z" level=info msg="connecting to shim c5fca3c62b640f706a7a65da0a1a5abbef669a93b292a4cbfa8b509b79f9b07c" address="unix:///run/containerd/s/c130739492d28fb3d494efb5da8f5f4490a2c73376502adaef55d3b7dcc1fec1" protocol=ttrpc version=3 May 15 01:06:48.206231 containerd[1482]: time="2025-05-15T01:06:48.206105053Z" level=info msg="StartContainer for \"866f6d8f56f2669314af008af413fbbe123bcb14ee85bdb62520b37c31509a6f\" returns successfully" May 15 01:06:48.206580 systemd[1]: Started cri-containerd-c5fca3c62b640f706a7a65da0a1a5abbef669a93b292a4cbfa8b509b79f9b07c.scope - libcontainer container c5fca3c62b640f706a7a65da0a1a5abbef669a93b292a4cbfa8b509b79f9b07c. May 15 01:06:48.262788 containerd[1482]: time="2025-05-15T01:06:48.262716584Z" level=info msg="StartContainer for \"c5fca3c62b640f706a7a65da0a1a5abbef669a93b292a4cbfa8b509b79f9b07c\" returns successfully" May 15 01:06:48.882243 kubelet[2713]: I0515 01:06:48.880217 2713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-8sxsr" podStartSLOduration=25.880173983 podStartE2EDuration="25.880173983s" podCreationTimestamp="2025-05-15 01:06:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 01:06:48.87959975 +0000 UTC m=+31.434032750" watchObservedRunningTime="2025-05-15 01:06:48.880173983 +0000 UTC m=+31.434606983" May 15 01:06:48.957314 kubelet[2713]: I0515 01:06:48.957090 2713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-5c4d7" podStartSLOduration=25.957064475 podStartE2EDuration="25.957064475s" podCreationTimestamp="2025-05-15 01:06:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 01:06:48.956482728 +0000 UTC m=+31.510915679" watchObservedRunningTime="2025-05-15 01:06:48.957064475 +0000 UTC m=+31.511497436" May 15 01:09:42.756640 systemd[1]: Started sshd@9-172.24.4.204:22-172.24.4.1:36440.service - OpenSSH per-connection server daemon (172.24.4.1:36440). May 15 01:09:44.193725 sshd[4045]: Accepted publickey for core from 172.24.4.1 port 36440 ssh2: RSA SHA256:FgM6DNLOO9l7igabXcQRJCJ/iDzuk0CAYzdzDa1bmG0 May 15 01:09:44.201108 sshd-session[4045]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 01:09:44.221515 systemd-logind[1462]: New session 12 of user core. May 15 01:09:44.228752 systemd[1]: Started session-12.scope - Session 12 of User core. 
May 15 01:09:45.107442 sshd[4047]: Connection closed by 172.24.4.1 port 36440 May 15 01:09:45.108780 sshd-session[4045]: pam_unix(sshd:session): session closed for user core May 15 01:09:45.115725 systemd[1]: sshd@9-172.24.4.204:22-172.24.4.1:36440.service: Deactivated successfully. May 15 01:09:45.124782 systemd[1]: session-12.scope: Deactivated successfully. May 15 01:09:45.128953 systemd-logind[1462]: Session 12 logged out. Waiting for processes to exit. May 15 01:09:45.132226 systemd-logind[1462]: Removed session 12. May 15 01:09:50.131728 systemd[1]: Started sshd@10-172.24.4.204:22-172.24.4.1:48628.service - OpenSSH per-connection server daemon (172.24.4.1:48628). May 15 01:09:51.193601 sshd[4060]: Accepted publickey for core from 172.24.4.1 port 48628 ssh2: RSA SHA256:FgM6DNLOO9l7igabXcQRJCJ/iDzuk0CAYzdzDa1bmG0 May 15 01:09:51.196828 sshd-session[4060]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 01:09:51.208840 systemd-logind[1462]: New session 13 of user core. May 15 01:09:51.218662 systemd[1]: Started session-13.scope - Session 13 of User core. May 15 01:09:51.932178 sshd[4062]: Connection closed by 172.24.4.1 port 48628 May 15 01:09:51.934004 sshd-session[4060]: pam_unix(sshd:session): session closed for user core May 15 01:09:51.940823 systemd[1]: sshd@10-172.24.4.204:22-172.24.4.1:48628.service: Deactivated successfully. May 15 01:09:51.947439 systemd[1]: session-13.scope: Deactivated successfully. May 15 01:09:51.952923 systemd-logind[1462]: Session 13 logged out. Waiting for processes to exit. May 15 01:09:51.955620 systemd-logind[1462]: Removed session 13. May 15 01:09:56.972928 systemd[1]: Started sshd@11-172.24.4.204:22-172.24.4.1:59856.service - OpenSSH per-connection server daemon (172.24.4.1:59856). May 15 01:09:58.149251 sshd[4077]: Accepted publickey for core from 172.24.4.1 port 59856 ssh2: RSA SHA256:FgM6DNLOO9l7igabXcQRJCJ/iDzuk0CAYzdzDa1bmG0 May 15 01:09:58.155650 sshd-session[4077]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 01:09:58.186050 systemd-logind[1462]: New session 14 of user core. May 15 01:09:58.197785 systemd[1]: Started session-14.scope - Session 14 of User core. May 15 01:09:58.987561 sshd[4079]: Connection closed by 172.24.4.1 port 59856 May 15 01:09:58.990711 sshd-session[4077]: pam_unix(sshd:session): session closed for user core May 15 01:09:59.000571 systemd[1]: sshd@11-172.24.4.204:22-172.24.4.1:59856.service: Deactivated successfully. May 15 01:09:59.008006 systemd[1]: session-14.scope: Deactivated successfully. May 15 01:09:59.010475 systemd-logind[1462]: Session 14 logged out. Waiting for processes to exit. May 15 01:09:59.013383 systemd-logind[1462]: Removed session 14. May 15 01:10:04.018360 systemd[1]: Started sshd@12-172.24.4.204:22-172.24.4.1:52078.service - OpenSSH per-connection server daemon (172.24.4.1:52078). May 15 01:10:05.335631 sshd[4092]: Accepted publickey for core from 172.24.4.1 port 52078 ssh2: RSA SHA256:FgM6DNLOO9l7igabXcQRJCJ/iDzuk0CAYzdzDa1bmG0 May 15 01:10:05.339046 sshd-session[4092]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 01:10:05.356461 systemd-logind[1462]: New session 15 of user core. May 15 01:10:05.361646 systemd[1]: Started session-15.scope - Session 15 of User core. 
May 15 01:10:06.166692 sshd[4094]: Connection closed by 172.24.4.1 port 52078 May 15 01:10:06.168234 sshd-session[4092]: pam_unix(sshd:session): session closed for user core May 15 01:10:06.186261 systemd[1]: sshd@12-172.24.4.204:22-172.24.4.1:52078.service: Deactivated successfully. May 15 01:10:06.190242 systemd[1]: session-15.scope: Deactivated successfully. May 15 01:10:06.192780 systemd-logind[1462]: Session 15 logged out. Waiting for processes to exit. May 15 01:10:06.199995 systemd[1]: Started sshd@13-172.24.4.204:22-172.24.4.1:52086.service - OpenSSH per-connection server daemon (172.24.4.1:52086). May 15 01:10:06.204959 systemd-logind[1462]: Removed session 15. May 15 01:10:07.246414 sshd[4105]: Accepted publickey for core from 172.24.4.1 port 52086 ssh2: RSA SHA256:FgM6DNLOO9l7igabXcQRJCJ/iDzuk0CAYzdzDa1bmG0 May 15 01:10:07.249457 sshd-session[4105]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 01:10:07.264388 systemd-logind[1462]: New session 16 of user core. May 15 01:10:07.273654 systemd[1]: Started session-16.scope - Session 16 of User core. May 15 01:10:08.087332 sshd[4108]: Connection closed by 172.24.4.1 port 52086 May 15 01:10:08.086830 sshd-session[4105]: pam_unix(sshd:session): session closed for user core May 15 01:10:08.105877 systemd[1]: sshd@13-172.24.4.204:22-172.24.4.1:52086.service: Deactivated successfully. May 15 01:10:08.112000 systemd[1]: session-16.scope: Deactivated successfully. May 15 01:10:08.115376 systemd-logind[1462]: Session 16 logged out. Waiting for processes to exit. May 15 01:10:08.121601 systemd[1]: Started sshd@14-172.24.4.204:22-172.24.4.1:52098.service - OpenSSH per-connection server daemon (172.24.4.1:52098). May 15 01:10:08.123939 systemd-logind[1462]: Removed session 16. May 15 01:10:09.395810 sshd[4117]: Accepted publickey for core from 172.24.4.1 port 52098 ssh2: RSA SHA256:FgM6DNLOO9l7igabXcQRJCJ/iDzuk0CAYzdzDa1bmG0 May 15 01:10:09.399160 sshd-session[4117]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 01:10:09.413378 systemd-logind[1462]: New session 17 of user core. May 15 01:10:09.419641 systemd[1]: Started session-17.scope - Session 17 of User core. May 15 01:10:10.087355 sshd[4120]: Connection closed by 172.24.4.1 port 52098 May 15 01:10:10.088722 sshd-session[4117]: pam_unix(sshd:session): session closed for user core May 15 01:10:10.095826 systemd-logind[1462]: Session 17 logged out. Waiting for processes to exit. May 15 01:10:10.097788 systemd[1]: sshd@14-172.24.4.204:22-172.24.4.1:52098.service: Deactivated successfully. May 15 01:10:10.104724 systemd[1]: session-17.scope: Deactivated successfully. May 15 01:10:10.110793 systemd-logind[1462]: Removed session 17. May 15 01:10:15.117812 systemd[1]: Started sshd@15-172.24.4.204:22-172.24.4.1:59390.service - OpenSSH per-connection server daemon (172.24.4.1:59390). May 15 01:10:16.162243 sshd[4132]: Accepted publickey for core from 172.24.4.1 port 59390 ssh2: RSA SHA256:FgM6DNLOO9l7igabXcQRJCJ/iDzuk0CAYzdzDa1bmG0 May 15 01:10:16.165645 sshd-session[4132]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 01:10:16.182414 systemd-logind[1462]: New session 18 of user core. May 15 01:10:16.188446 systemd[1]: Started session-18.scope - Session 18 of User core. 
May 15 01:10:16.908321 sshd[4137]: Connection closed by 172.24.4.1 port 59390 May 15 01:10:16.909060 sshd-session[4132]: pam_unix(sshd:session): session closed for user core May 15 01:10:16.937985 systemd[1]: sshd@15-172.24.4.204:22-172.24.4.1:59390.service: Deactivated successfully. May 15 01:10:16.943044 systemd[1]: session-18.scope: Deactivated successfully. May 15 01:10:16.949646 systemd-logind[1462]: Session 18 logged out. Waiting for processes to exit. May 15 01:10:16.953407 systemd[1]: Started sshd@16-172.24.4.204:22-172.24.4.1:59406.service - OpenSSH per-connection server daemon (172.24.4.1:59406). May 15 01:10:16.957699 systemd-logind[1462]: Removed session 18. May 15 01:10:18.123631 sshd[4148]: Accepted publickey for core from 172.24.4.1 port 59406 ssh2: RSA SHA256:FgM6DNLOO9l7igabXcQRJCJ/iDzuk0CAYzdzDa1bmG0 May 15 01:10:18.126901 sshd-session[4148]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 01:10:18.140204 systemd-logind[1462]: New session 19 of user core. May 15 01:10:18.147720 systemd[1]: Started session-19.scope - Session 19 of User core. May 15 01:10:18.919330 sshd[4153]: Connection closed by 172.24.4.1 port 59406 May 15 01:10:18.919627 sshd-session[4148]: pam_unix(sshd:session): session closed for user core May 15 01:10:18.941116 systemd[1]: sshd@16-172.24.4.204:22-172.24.4.1:59406.service: Deactivated successfully. May 15 01:10:18.956870 systemd[1]: session-19.scope: Deactivated successfully. May 15 01:10:18.959388 systemd-logind[1462]: Session 19 logged out. Waiting for processes to exit. May 15 01:10:18.966532 systemd[1]: Started sshd@17-172.24.4.204:22-172.24.4.1:59414.service - OpenSSH per-connection server daemon (172.24.4.1:59414). May 15 01:10:18.969954 systemd-logind[1462]: Removed session 19. May 15 01:10:20.131081 sshd[4162]: Accepted publickey for core from 172.24.4.1 port 59414 ssh2: RSA SHA256:FgM6DNLOO9l7igabXcQRJCJ/iDzuk0CAYzdzDa1bmG0 May 15 01:10:20.134726 sshd-session[4162]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 01:10:20.147327 systemd-logind[1462]: New session 20 of user core. May 15 01:10:20.155608 systemd[1]: Started session-20.scope - Session 20 of User core. May 15 01:10:22.277092 sshd[4165]: Connection closed by 172.24.4.1 port 59414 May 15 01:10:22.278682 sshd-session[4162]: pam_unix(sshd:session): session closed for user core May 15 01:10:22.305000 systemd[1]: sshd@17-172.24.4.204:22-172.24.4.1:59414.service: Deactivated successfully. May 15 01:10:22.309266 systemd[1]: session-20.scope: Deactivated successfully. May 15 01:10:22.314412 systemd-logind[1462]: Session 20 logged out. Waiting for processes to exit. May 15 01:10:22.318994 systemd[1]: Started sshd@18-172.24.4.204:22-172.24.4.1:59420.service - OpenSSH per-connection server daemon (172.24.4.1:59420). May 15 01:10:22.323632 systemd-logind[1462]: Removed session 20. May 15 01:10:23.502693 sshd[4181]: Accepted publickey for core from 172.24.4.1 port 59420 ssh2: RSA SHA256:FgM6DNLOO9l7igabXcQRJCJ/iDzuk0CAYzdzDa1bmG0 May 15 01:10:23.505895 sshd-session[4181]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 01:10:23.521420 systemd-logind[1462]: New session 21 of user core. May 15 01:10:23.531594 systemd[1]: Started session-21.scope - Session 21 of User core. 
May 15 01:10:24.419830 sshd[4184]: Connection closed by 172.24.4.1 port 59420 May 15 01:10:24.421634 sshd-session[4181]: pam_unix(sshd:session): session closed for user core May 15 01:10:24.441165 systemd[1]: sshd@18-172.24.4.204:22-172.24.4.1:59420.service: Deactivated successfully. May 15 01:10:24.447063 systemd[1]: session-21.scope: Deactivated successfully. May 15 01:10:24.450241 systemd-logind[1462]: Session 21 logged out. Waiting for processes to exit. May 15 01:10:24.458074 systemd[1]: Started sshd@19-172.24.4.204:22-172.24.4.1:36064.service - OpenSSH per-connection server daemon (172.24.4.1:36064). May 15 01:10:24.460723 systemd-logind[1462]: Removed session 21. May 15 01:10:25.771425 sshd[4194]: Accepted publickey for core from 172.24.4.1 port 36064 ssh2: RSA SHA256:FgM6DNLOO9l7igabXcQRJCJ/iDzuk0CAYzdzDa1bmG0 May 15 01:10:25.774762 sshd-session[4194]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 01:10:25.791150 systemd-logind[1462]: New session 22 of user core. May 15 01:10:25.797649 systemd[1]: Started session-22.scope - Session 22 of User core. May 15 01:10:26.688941 sshd[4199]: Connection closed by 172.24.4.1 port 36064 May 15 01:10:26.690449 sshd-session[4194]: pam_unix(sshd:session): session closed for user core May 15 01:10:26.699734 systemd[1]: sshd@19-172.24.4.204:22-172.24.4.1:36064.service: Deactivated successfully. May 15 01:10:26.707704 systemd[1]: session-22.scope: Deactivated successfully. May 15 01:10:26.710914 systemd-logind[1462]: Session 22 logged out. Waiting for processes to exit. May 15 01:10:26.713858 systemd-logind[1462]: Removed session 22. May 15 01:10:31.725603 systemd[1]: Started sshd@20-172.24.4.204:22-172.24.4.1:36068.service - OpenSSH per-connection server daemon (172.24.4.1:36068). May 15 01:10:32.890664 sshd[4215]: Accepted publickey for core from 172.24.4.1 port 36068 ssh2: RSA SHA256:FgM6DNLOO9l7igabXcQRJCJ/iDzuk0CAYzdzDa1bmG0 May 15 01:10:32.894929 sshd-session[4215]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 01:10:32.911833 systemd-logind[1462]: New session 23 of user core. May 15 01:10:32.921704 systemd[1]: Started session-23.scope - Session 23 of User core. May 15 01:10:33.771438 sshd[4217]: Connection closed by 172.24.4.1 port 36068 May 15 01:10:33.771416 sshd-session[4215]: pam_unix(sshd:session): session closed for user core May 15 01:10:33.782519 systemd[1]: sshd@20-172.24.4.204:22-172.24.4.1:36068.service: Deactivated successfully. May 15 01:10:33.783940 systemd-logind[1462]: Session 23 logged out. Waiting for processes to exit. May 15 01:10:33.792682 systemd[1]: session-23.scope: Deactivated successfully. May 15 01:10:33.799639 systemd-logind[1462]: Removed session 23. May 15 01:10:38.799850 systemd[1]: Started sshd@21-172.24.4.204:22-172.24.4.1:43932.service - OpenSSH per-connection server daemon (172.24.4.1:43932). May 15 01:10:40.126924 sshd[4228]: Accepted publickey for core from 172.24.4.1 port 43932 ssh2: RSA SHA256:FgM6DNLOO9l7igabXcQRJCJ/iDzuk0CAYzdzDa1bmG0 May 15 01:10:40.130543 sshd-session[4228]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 01:10:40.143895 systemd-logind[1462]: New session 24 of user core. May 15 01:10:40.152795 systemd[1]: Started session-24.scope - Session 24 of User core. 
May 15 01:10:40.738333 sshd[4230]: Connection closed by 172.24.4.1 port 43932 May 15 01:10:40.737197 sshd-session[4228]: pam_unix(sshd:session): session closed for user core May 15 01:10:40.745247 systemd[1]: sshd@21-172.24.4.204:22-172.24.4.1:43932.service: Deactivated successfully. May 15 01:10:40.753048 systemd[1]: session-24.scope: Deactivated successfully. May 15 01:10:40.757032 systemd-logind[1462]: Session 24 logged out. Waiting for processes to exit. May 15 01:10:40.760638 systemd-logind[1462]: Removed session 24. May 15 01:10:45.758924 systemd[1]: Started sshd@22-172.24.4.204:22-172.24.4.1:54202.service - OpenSSH per-connection server daemon (172.24.4.1:54202). May 15 01:10:46.886871 sshd[4242]: Accepted publickey for core from 172.24.4.1 port 54202 ssh2: RSA SHA256:FgM6DNLOO9l7igabXcQRJCJ/iDzuk0CAYzdzDa1bmG0 May 15 01:10:46.888584 sshd-session[4242]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 01:10:46.900795 systemd-logind[1462]: New session 25 of user core. May 15 01:10:46.910715 systemd[1]: Started session-25.scope - Session 25 of User core. May 15 01:10:47.779067 sshd[4244]: Connection closed by 172.24.4.1 port 54202 May 15 01:10:47.776350 sshd-session[4242]: pam_unix(sshd:session): session closed for user core May 15 01:10:47.794038 systemd[1]: sshd@22-172.24.4.204:22-172.24.4.1:54202.service: Deactivated successfully. May 15 01:10:47.800933 systemd[1]: session-25.scope: Deactivated successfully. May 15 01:10:47.806628 systemd-logind[1462]: Session 25 logged out. Waiting for processes to exit. May 15 01:10:47.819819 systemd[1]: Started sshd@23-172.24.4.204:22-172.24.4.1:54206.service - OpenSSH per-connection server daemon (172.24.4.1:54206). May 15 01:10:47.829467 systemd-logind[1462]: Removed session 25. May 15 01:10:49.011455 sshd[4255]: Accepted publickey for core from 172.24.4.1 port 54206 ssh2: RSA SHA256:FgM6DNLOO9l7igabXcQRJCJ/iDzuk0CAYzdzDa1bmG0 May 15 01:10:49.015659 sshd-session[4255]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 01:10:49.029890 systemd-logind[1462]: New session 26 of user core. May 15 01:10:49.039618 systemd[1]: Started session-26.scope - Session 26 of User core. May 15 01:10:51.117845 containerd[1482]: time="2025-05-15T01:10:51.117541097Z" level=info msg="StopContainer for \"8dbbabd422b6467ef09814a72f76daea6719b7b404962b37c351e344c064f5f3\" with timeout 30 (s)" May 15 01:10:51.120341 containerd[1482]: time="2025-05-15T01:10:51.119874724Z" level=info msg="Stop container \"8dbbabd422b6467ef09814a72f76daea6719b7b404962b37c351e344c064f5f3\" with signal terminated" May 15 01:10:51.143940 systemd[1]: cri-containerd-8dbbabd422b6467ef09814a72f76daea6719b7b404962b37c351e344c064f5f3.scope: Deactivated successfully. May 15 01:10:51.145066 systemd[1]: cri-containerd-8dbbabd422b6467ef09814a72f76daea6719b7b404962b37c351e344c064f5f3.scope: Consumed 1.153s CPU time, 27M memory peak, 4K written to disk. 
May 15 01:10:51.155242 containerd[1482]: time="2025-05-15T01:10:51.155142853Z" level=info msg="received exit event container_id:\"8dbbabd422b6467ef09814a72f76daea6719b7b404962b37c351e344c064f5f3\" id:\"8dbbabd422b6467ef09814a72f76daea6719b7b404962b37c351e344c064f5f3\" pid:3310 exited_at:{seconds:1747271451 nanos:148352441}" May 15 01:10:51.156384 containerd[1482]: time="2025-05-15T01:10:51.156332480Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8dbbabd422b6467ef09814a72f76daea6719b7b404962b37c351e344c064f5f3\" id:\"8dbbabd422b6467ef09814a72f76daea6719b7b404962b37c351e344c064f5f3\" pid:3310 exited_at:{seconds:1747271451 nanos:148352441}" May 15 01:10:51.168540 containerd[1482]: time="2025-05-15T01:10:51.168380317Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 15 01:10:51.175150 containerd[1482]: time="2025-05-15T01:10:51.175079145Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fbac1840d41a05b579ee6a03807f157b2065b6453318e1ad5fb017227022277d\" id:\"9e6b00f94fc485a35e85fc2aadf6a39207497e6e60d4a1d32831197afdd7cafe\" pid:4285 exited_at:{seconds:1747271451 nanos:174479016}" May 15 01:10:51.181296 containerd[1482]: time="2025-05-15T01:10:51.181167065Z" level=info msg="StopContainer for \"fbac1840d41a05b579ee6a03807f157b2065b6453318e1ad5fb017227022277d\" with timeout 2 (s)" May 15 01:10:51.182836 containerd[1482]: time="2025-05-15T01:10:51.182626063Z" level=info msg="Stop container \"fbac1840d41a05b579ee6a03807f157b2065b6453318e1ad5fb017227022277d\" with signal terminated" May 15 01:10:51.208531 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8dbbabd422b6467ef09814a72f76daea6719b7b404962b37c351e344c064f5f3-rootfs.mount: Deactivated successfully. May 15 01:10:51.213575 systemd-networkd[1381]: lxc_health: Link DOWN May 15 01:10:51.213585 systemd-networkd[1381]: lxc_health: Lost carrier May 15 01:10:51.228431 systemd[1]: cri-containerd-fbac1840d41a05b579ee6a03807f157b2065b6453318e1ad5fb017227022277d.scope: Deactivated successfully. May 15 01:10:51.233228 containerd[1482]: time="2025-05-15T01:10:51.231044491Z" level=info msg="received exit event container_id:\"fbac1840d41a05b579ee6a03807f157b2065b6453318e1ad5fb017227022277d\" id:\"fbac1840d41a05b579ee6a03807f157b2065b6453318e1ad5fb017227022277d\" pid:3344 exited_at:{seconds:1747271451 nanos:228149057}" May 15 01:10:51.233228 containerd[1482]: time="2025-05-15T01:10:51.231461001Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fbac1840d41a05b579ee6a03807f157b2065b6453318e1ad5fb017227022277d\" id:\"fbac1840d41a05b579ee6a03807f157b2065b6453318e1ad5fb017227022277d\" pid:3344 exited_at:{seconds:1747271451 nanos:228149057}" May 15 01:10:51.228744 systemd[1]: cri-containerd-fbac1840d41a05b579ee6a03807f157b2065b6453318e1ad5fb017227022277d.scope: Consumed 10.370s CPU time, 124.4M memory peak, 152K read from disk, 13.3M written to disk. 
May 15 01:10:51.246831 containerd[1482]: time="2025-05-15T01:10:51.246061510Z" level=info msg="StopContainer for \"8dbbabd422b6467ef09814a72f76daea6719b7b404962b37c351e344c064f5f3\" returns successfully" May 15 01:10:51.247865 containerd[1482]: time="2025-05-15T01:10:51.247830827Z" level=info msg="StopPodSandbox for \"db23e80c84fdde99996535a9633855bdce6d049d5c0a5ed8f3ceda9cf64d6bba\"" May 15 01:10:51.247944 containerd[1482]: time="2025-05-15T01:10:51.247927550Z" level=info msg="Container to stop \"8dbbabd422b6467ef09814a72f76daea6719b7b404962b37c351e344c064f5f3\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 15 01:10:51.263351 systemd[1]: cri-containerd-db23e80c84fdde99996535a9633855bdce6d049d5c0a5ed8f3ceda9cf64d6bba.scope: Deactivated successfully. May 15 01:10:51.271178 containerd[1482]: time="2025-05-15T01:10:51.270978972Z" level=info msg="TaskExit event in podsandbox handler container_id:\"db23e80c84fdde99996535a9633855bdce6d049d5c0a5ed8f3ceda9cf64d6bba\" id:\"db23e80c84fdde99996535a9633855bdce6d049d5c0a5ed8f3ceda9cf64d6bba\" pid:2877 exit_status:137 exited_at:{seconds:1747271451 nanos:270541413}" May 15 01:10:51.281537 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fbac1840d41a05b579ee6a03807f157b2065b6453318e1ad5fb017227022277d-rootfs.mount: Deactivated successfully. May 15 01:10:51.309195 containerd[1482]: time="2025-05-15T01:10:51.308912688Z" level=info msg="StopContainer for \"fbac1840d41a05b579ee6a03807f157b2065b6453318e1ad5fb017227022277d\" returns successfully" May 15 01:10:51.310043 containerd[1482]: time="2025-05-15T01:10:51.309647623Z" level=info msg="StopPodSandbox for \"d9390dfaf65a87619c84f0612a962b722996ca6161c029cbc2b41a0170349a1a\"" May 15 01:10:51.310043 containerd[1482]: time="2025-05-15T01:10:51.309755437Z" level=info msg="Container to stop \"ac11d2fa667b4975bef6f2275750c13f812e5d1eb0c8a492b259925ac26fb1f8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 15 01:10:51.310043 containerd[1482]: time="2025-05-15T01:10:51.309775515Z" level=info msg="Container to stop \"13bd8c81138bdb9b204114b7d6d63ecd383a0b99d8a84cb05c135b2bc88d22b1\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 15 01:10:51.310043 containerd[1482]: time="2025-05-15T01:10:51.309800232Z" level=info msg="Container to stop \"386cb6ac14c0550aa9591ea5babd5cde59e9487e8e223b8984b4495db147a5e7\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 15 01:10:51.311306 containerd[1482]: time="2025-05-15T01:10:51.311028042Z" level=info msg="Container to stop \"fb3acaa8a988841c1befd658615d3ce6eeab18d5cfc3d7a6fd6a65b1b3971190\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 15 01:10:51.311306 containerd[1482]: time="2025-05-15T01:10:51.311054242Z" level=info msg="Container to stop \"fbac1840d41a05b579ee6a03807f157b2065b6453318e1ad5fb017227022277d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 15 01:10:51.320444 systemd[1]: cri-containerd-d9390dfaf65a87619c84f0612a962b722996ca6161c029cbc2b41a0170349a1a.scope: Deactivated successfully. May 15 01:10:51.336623 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-db23e80c84fdde99996535a9633855bdce6d049d5c0a5ed8f3ceda9cf64d6bba-rootfs.mount: Deactivated successfully. 
May 15 01:10:51.340763 containerd[1482]: time="2025-05-15T01:10:51.340334591Z" level=info msg="shim disconnected" id=db23e80c84fdde99996535a9633855bdce6d049d5c0a5ed8f3ceda9cf64d6bba namespace=k8s.io May 15 01:10:51.340763 containerd[1482]: time="2025-05-15T01:10:51.340378725Z" level=warning msg="cleaning up after shim disconnected" id=db23e80c84fdde99996535a9633855bdce6d049d5c0a5ed8f3ceda9cf64d6bba namespace=k8s.io May 15 01:10:51.340763 containerd[1482]: time="2025-05-15T01:10:51.340396448Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 15 01:10:51.366626 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d9390dfaf65a87619c84f0612a962b722996ca6161c029cbc2b41a0170349a1a-rootfs.mount: Deactivated successfully. May 15 01:10:51.371812 containerd[1482]: time="2025-05-15T01:10:51.371059721Z" level=info msg="shim disconnected" id=d9390dfaf65a87619c84f0612a962b722996ca6161c029cbc2b41a0170349a1a namespace=k8s.io May 15 01:10:51.371969 containerd[1482]: time="2025-05-15T01:10:51.371949980Z" level=warning msg="cleaning up after shim disconnected" id=d9390dfaf65a87619c84f0612a962b722996ca6161c029cbc2b41a0170349a1a namespace=k8s.io May 15 01:10:51.372293 containerd[1482]: time="2025-05-15T01:10:51.372069366Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 15 01:10:51.372407 containerd[1482]: time="2025-05-15T01:10:51.371104547Z" level=warning msg="cleanup warnings time=\"2025-05-15T01:10:51Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io May 15 01:10:51.375445 containerd[1482]: time="2025-05-15T01:10:51.375349520Z" level=error msg="Failed to handle event container_id:\"db23e80c84fdde99996535a9633855bdce6d049d5c0a5ed8f3ceda9cf64d6bba\" id:\"db23e80c84fdde99996535a9633855bdce6d049d5c0a5ed8f3ceda9cf64d6bba\" pid:2877 exit_status:137 exited_at:{seconds:1747271451 nanos:270541413} for db23e80c84fdde99996535a9633855bdce6d049d5c0a5ed8f3ceda9cf64d6bba" error="failed to handle container TaskExit event: failed to stop sandbox: failed to delete task: ttrpc: closed" May 15 01:10:51.375575 containerd[1482]: time="2025-05-15T01:10:51.375471711Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d9390dfaf65a87619c84f0612a962b722996ca6161c029cbc2b41a0170349a1a\" id:\"d9390dfaf65a87619c84f0612a962b722996ca6161c029cbc2b41a0170349a1a\" pid:2897 exit_status:137 exited_at:{seconds:1747271451 nanos:324596884}" May 15 01:10:51.377462 containerd[1482]: time="2025-05-15T01:10:51.377422633Z" level=info msg="received exit event sandbox_id:\"db23e80c84fdde99996535a9633855bdce6d049d5c0a5ed8f3ceda9cf64d6bba\" exit_status:137 exited_at:{seconds:1747271451 nanos:270541413}" May 15 01:10:51.378232 containerd[1482]: time="2025-05-15T01:10:51.378207692Z" level=info msg="TearDown network for sandbox \"db23e80c84fdde99996535a9633855bdce6d049d5c0a5ed8f3ceda9cf64d6bba\" successfully" May 15 01:10:51.378357 containerd[1482]: time="2025-05-15T01:10:51.378339923Z" level=info msg="StopPodSandbox for \"db23e80c84fdde99996535a9633855bdce6d049d5c0a5ed8f3ceda9cf64d6bba\" returns successfully" May 15 01:10:51.378998 containerd[1482]: time="2025-05-15T01:10:51.378945502Z" level=info msg="received exit event sandbox_id:\"d9390dfaf65a87619c84f0612a962b722996ca6161c029cbc2b41a0170349a1a\" exit_status:137 exited_at:{seconds:1747271451 nanos:324596884}" May 15 01:10:51.380185 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-d9390dfaf65a87619c84f0612a962b722996ca6161c029cbc2b41a0170349a1a-shm.mount: Deactivated successfully. May 15 01:10:51.381807 containerd[1482]: time="2025-05-15T01:10:51.381731046Z" level=info msg="TearDown network for sandbox \"d9390dfaf65a87619c84f0612a962b722996ca6161c029cbc2b41a0170349a1a\" successfully" May 15 01:10:51.381807 containerd[1482]: time="2025-05-15T01:10:51.381757006Z" level=info msg="StopPodSandbox for \"d9390dfaf65a87619c84f0612a962b722996ca6161c029cbc2b41a0170349a1a\" returns successfully" May 15 01:10:51.519097 kubelet[2713]: I0515 01:10:51.519013 2713 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2vx74\" (UniqueName: \"kubernetes.io/projected/2953288d-cd7c-45a0-b6af-08868a2e32ea-kube-api-access-2vx74\") pod \"2953288d-cd7c-45a0-b6af-08868a2e32ea\" (UID: \"2953288d-cd7c-45a0-b6af-08868a2e32ea\") " May 15 01:10:51.521419 kubelet[2713]: I0515 01:10:51.520205 2713 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e89a2e4d-62b4-4294-ace6-87ba5bd89634-etc-cni-netd\") pod \"e89a2e4d-62b4-4294-ace6-87ba5bd89634\" (UID: \"e89a2e4d-62b4-4294-ace6-87ba5bd89634\") " May 15 01:10:51.521419 kubelet[2713]: I0515 01:10:51.520358 2713 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e89a2e4d-62b4-4294-ace6-87ba5bd89634-cilium-config-path\") pod \"e89a2e4d-62b4-4294-ace6-87ba5bd89634\" (UID: \"e89a2e4d-62b4-4294-ace6-87ba5bd89634\") " May 15 01:10:51.521419 kubelet[2713]: I0515 01:10:51.520516 2713 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f8wz6\" (UniqueName: \"kubernetes.io/projected/e89a2e4d-62b4-4294-ace6-87ba5bd89634-kube-api-access-f8wz6\") pod \"e89a2e4d-62b4-4294-ace6-87ba5bd89634\" (UID: \"e89a2e4d-62b4-4294-ace6-87ba5bd89634\") " May 15 01:10:51.521419 kubelet[2713]: I0515 01:10:51.520635 2713 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e89a2e4d-62b4-4294-ace6-87ba5bd89634-host-proc-sys-kernel\") pod \"e89a2e4d-62b4-4294-ace6-87ba5bd89634\" (UID: \"e89a2e4d-62b4-4294-ace6-87ba5bd89634\") " May 15 01:10:51.521419 kubelet[2713]: I0515 01:10:51.520741 2713 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2953288d-cd7c-45a0-b6af-08868a2e32ea-cilium-config-path\") pod \"2953288d-cd7c-45a0-b6af-08868a2e32ea\" (UID: \"2953288d-cd7c-45a0-b6af-08868a2e32ea\") " May 15 01:10:51.524404 kubelet[2713]: I0515 01:10:51.521996 2713 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e89a2e4d-62b4-4294-ace6-87ba5bd89634-host-proc-sys-net\") pod \"e89a2e4d-62b4-4294-ace6-87ba5bd89634\" (UID: \"e89a2e4d-62b4-4294-ace6-87ba5bd89634\") " May 15 01:10:51.524404 kubelet[2713]: I0515 01:10:51.522111 2713 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e89a2e4d-62b4-4294-ace6-87ba5bd89634-hubble-tls\") pod \"e89a2e4d-62b4-4294-ace6-87ba5bd89634\" (UID: \"e89a2e4d-62b4-4294-ace6-87ba5bd89634\") " May 15 01:10:51.524404 kubelet[2713]: I0515 01:10:51.522188 2713 reconciler_common.go:162] "operationExecutor.UnmountVolume started 
for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e89a2e4d-62b4-4294-ace6-87ba5bd89634-cilium-run\") pod \"e89a2e4d-62b4-4294-ace6-87ba5bd89634\" (UID: \"e89a2e4d-62b4-4294-ace6-87ba5bd89634\") " May 15 01:10:51.524404 kubelet[2713]: I0515 01:10:51.522266 2713 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e89a2e4d-62b4-4294-ace6-87ba5bd89634-lib-modules\") pod \"e89a2e4d-62b4-4294-ace6-87ba5bd89634\" (UID: \"e89a2e4d-62b4-4294-ace6-87ba5bd89634\") " May 15 01:10:51.524404 kubelet[2713]: I0515 01:10:51.522389 2713 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e89a2e4d-62b4-4294-ace6-87ba5bd89634-cni-path\") pod \"e89a2e4d-62b4-4294-ace6-87ba5bd89634\" (UID: \"e89a2e4d-62b4-4294-ace6-87ba5bd89634\") " May 15 01:10:51.524404 kubelet[2713]: I0515 01:10:51.522477 2713 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e89a2e4d-62b4-4294-ace6-87ba5bd89634-cilium-cgroup\") pod \"e89a2e4d-62b4-4294-ace6-87ba5bd89634\" (UID: \"e89a2e4d-62b4-4294-ace6-87ba5bd89634\") " May 15 01:10:51.525269 kubelet[2713]: I0515 01:10:51.522756 2713 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e89a2e4d-62b4-4294-ace6-87ba5bd89634-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "e89a2e4d-62b4-4294-ace6-87ba5bd89634" (UID: "e89a2e4d-62b4-4294-ace6-87ba5bd89634"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 15 01:10:51.525269 kubelet[2713]: I0515 01:10:51.522917 2713 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e89a2e4d-62b4-4294-ace6-87ba5bd89634-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "e89a2e4d-62b4-4294-ace6-87ba5bd89634" (UID: "e89a2e4d-62b4-4294-ace6-87ba5bd89634"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 15 01:10:51.525768 kubelet[2713]: I0515 01:10:51.525678 2713 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e89a2e4d-62b4-4294-ace6-87ba5bd89634-xtables-lock\") pod \"e89a2e4d-62b4-4294-ace6-87ba5bd89634\" (UID: \"e89a2e4d-62b4-4294-ace6-87ba5bd89634\") " May 15 01:10:51.526021 kubelet[2713]: I0515 01:10:51.525959 2713 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e89a2e4d-62b4-4294-ace6-87ba5bd89634-bpf-maps\") pod \"e89a2e4d-62b4-4294-ace6-87ba5bd89634\" (UID: \"e89a2e4d-62b4-4294-ace6-87ba5bd89634\") " May 15 01:10:51.526231 kubelet[2713]: I0515 01:10:51.526197 2713 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e89a2e4d-62b4-4294-ace6-87ba5bd89634-hostproc\") pod \"e89a2e4d-62b4-4294-ace6-87ba5bd89634\" (UID: \"e89a2e4d-62b4-4294-ace6-87ba5bd89634\") " May 15 01:10:51.526732 kubelet[2713]: I0515 01:10:51.526645 2713 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e89a2e4d-62b4-4294-ace6-87ba5bd89634-clustermesh-secrets\") pod \"e89a2e4d-62b4-4294-ace6-87ba5bd89634\" (UID: \"e89a2e4d-62b4-4294-ace6-87ba5bd89634\") " May 15 01:10:51.531981 kubelet[2713]: I0515 01:10:51.531896 2713 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e89a2e4d-62b4-4294-ace6-87ba5bd89634-host-proc-sys-net\") on node \"ci-4284-0-0-n-df1b790171.novalocal\" DevicePath \"\"" May 15 01:10:51.531981 kubelet[2713]: I0515 01:10:51.531973 2713 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e89a2e4d-62b4-4294-ace6-87ba5bd89634-etc-cni-netd\") on node \"ci-4284-0-0-n-df1b790171.novalocal\" DevicePath \"\"" May 15 01:10:51.531981 kubelet[2713]: I0515 01:10:51.531489 2713 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e89a2e4d-62b4-4294-ace6-87ba5bd89634-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "e89a2e4d-62b4-4294-ace6-87ba5bd89634" (UID: "e89a2e4d-62b4-4294-ace6-87ba5bd89634"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 15 01:10:51.532422 kubelet[2713]: I0515 01:10:51.531564 2713 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e89a2e4d-62b4-4294-ace6-87ba5bd89634-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "e89a2e4d-62b4-4294-ace6-87ba5bd89634" (UID: "e89a2e4d-62b4-4294-ace6-87ba5bd89634"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 15 01:10:51.532422 kubelet[2713]: I0515 01:10:51.531596 2713 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e89a2e4d-62b4-4294-ace6-87ba5bd89634-cni-path" (OuterVolumeSpecName: "cni-path") pod "e89a2e4d-62b4-4294-ace6-87ba5bd89634" (UID: "e89a2e4d-62b4-4294-ace6-87ba5bd89634"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 15 01:10:51.532422 kubelet[2713]: I0515 01:10:51.531624 2713 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e89a2e4d-62b4-4294-ace6-87ba5bd89634-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "e89a2e4d-62b4-4294-ace6-87ba5bd89634" (UID: "e89a2e4d-62b4-4294-ace6-87ba5bd89634"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 15 01:10:51.532422 kubelet[2713]: I0515 01:10:51.531651 2713 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e89a2e4d-62b4-4294-ace6-87ba5bd89634-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "e89a2e4d-62b4-4294-ace6-87ba5bd89634" (UID: "e89a2e4d-62b4-4294-ace6-87ba5bd89634"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 15 01:10:51.532422 kubelet[2713]: I0515 01:10:51.531681 2713 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e89a2e4d-62b4-4294-ace6-87ba5bd89634-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "e89a2e4d-62b4-4294-ace6-87ba5bd89634" (UID: "e89a2e4d-62b4-4294-ace6-87ba5bd89634"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 15 01:10:51.533797 kubelet[2713]: I0515 01:10:51.531709 2713 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e89a2e4d-62b4-4294-ace6-87ba5bd89634-hostproc" (OuterVolumeSpecName: "hostproc") pod "e89a2e4d-62b4-4294-ace6-87ba5bd89634" (UID: "e89a2e4d-62b4-4294-ace6-87ba5bd89634"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 15 01:10:51.533797 kubelet[2713]: I0515 01:10:51.532259 2713 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2953288d-cd7c-45a0-b6af-08868a2e32ea-kube-api-access-2vx74" (OuterVolumeSpecName: "kube-api-access-2vx74") pod "2953288d-cd7c-45a0-b6af-08868a2e32ea" (UID: "2953288d-cd7c-45a0-b6af-08868a2e32ea"). InnerVolumeSpecName "kube-api-access-2vx74". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 15 01:10:51.535381 kubelet[2713]: I0515 01:10:51.534566 2713 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e89a2e4d-62b4-4294-ace6-87ba5bd89634-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "e89a2e4d-62b4-4294-ace6-87ba5bd89634" (UID: "e89a2e4d-62b4-4294-ace6-87ba5bd89634"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 15 01:10:51.548721 kubelet[2713]: I0515 01:10:51.543218 2713 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e89a2e4d-62b4-4294-ace6-87ba5bd89634-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "e89a2e4d-62b4-4294-ace6-87ba5bd89634" (UID: "e89a2e4d-62b4-4294-ace6-87ba5bd89634"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 15 01:10:51.552510 kubelet[2713]: I0515 01:10:51.551720 2713 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e89a2e4d-62b4-4294-ace6-87ba5bd89634-kube-api-access-f8wz6" (OuterVolumeSpecName: "kube-api-access-f8wz6") pod "e89a2e4d-62b4-4294-ace6-87ba5bd89634" (UID: "e89a2e4d-62b4-4294-ace6-87ba5bd89634"). InnerVolumeSpecName "kube-api-access-f8wz6". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" May 15 01:10:51.557118 kubelet[2713]: I0515 01:10:51.557048 2713 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e89a2e4d-62b4-4294-ace6-87ba5bd89634-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "e89a2e4d-62b4-4294-ace6-87ba5bd89634" (UID: "e89a2e4d-62b4-4294-ace6-87ba5bd89634"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" May 15 01:10:51.562727 kubelet[2713]: I0515 01:10:51.562608 2713 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2953288d-cd7c-45a0-b6af-08868a2e32ea-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "2953288d-cd7c-45a0-b6af-08868a2e32ea" (UID: "2953288d-cd7c-45a0-b6af-08868a2e32ea"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 15 01:10:51.570433 kubelet[2713]: I0515 01:10:51.570337 2713 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e89a2e4d-62b4-4294-ace6-87ba5bd89634-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "e89a2e4d-62b4-4294-ace6-87ba5bd89634" (UID: "e89a2e4d-62b4-4294-ace6-87ba5bd89634"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 15 01:10:51.624125 systemd[1]: Removed slice kubepods-besteffort-pod2953288d_cd7c_45a0_b6af_08868a2e32ea.slice - libcontainer container kubepods-besteffort-pod2953288d_cd7c_45a0_b6af_08868a2e32ea.slice. May 15 01:10:51.624248 systemd[1]: kubepods-besteffort-pod2953288d_cd7c_45a0_b6af_08868a2e32ea.slice: Consumed 1.193s CPU time, 27.2M memory peak, 4K written to disk. May 15 01:10:51.629018 systemd[1]: Removed slice kubepods-burstable-pode89a2e4d_62b4_4294_ace6_87ba5bd89634.slice - libcontainer container kubepods-burstable-pode89a2e4d_62b4_4294_ace6_87ba5bd89634.slice. May 15 01:10:51.629319 systemd[1]: kubepods-burstable-pode89a2e4d_62b4_4294_ace6_87ba5bd89634.slice: Consumed 10.507s CPU time, 124.8M memory peak, 152K read from disk, 13.3M written to disk. 
May 15 01:10:51.632897 kubelet[2713]: I0515 01:10:51.632853 2713 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e89a2e4d-62b4-4294-ace6-87ba5bd89634-lib-modules\") on node \"ci-4284-0-0-n-df1b790171.novalocal\" DevicePath \"\"" May 15 01:10:51.632897 kubelet[2713]: I0515 01:10:51.632887 2713 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e89a2e4d-62b4-4294-ace6-87ba5bd89634-cilium-run\") on node \"ci-4284-0-0-n-df1b790171.novalocal\" DevicePath \"\"" May 15 01:10:51.632897 kubelet[2713]: I0515 01:10:51.632899 2713 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e89a2e4d-62b4-4294-ace6-87ba5bd89634-xtables-lock\") on node \"ci-4284-0-0-n-df1b790171.novalocal\" DevicePath \"\"" May 15 01:10:51.632897 kubelet[2713]: I0515 01:10:51.632913 2713 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e89a2e4d-62b4-4294-ace6-87ba5bd89634-cni-path\") on node \"ci-4284-0-0-n-df1b790171.novalocal\" DevicePath \"\"" May 15 01:10:51.633488 kubelet[2713]: I0515 01:10:51.632925 2713 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e89a2e4d-62b4-4294-ace6-87ba5bd89634-cilium-cgroup\") on node \"ci-4284-0-0-n-df1b790171.novalocal\" DevicePath \"\"" May 15 01:10:51.633488 kubelet[2713]: I0515 01:10:51.632937 2713 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e89a2e4d-62b4-4294-ace6-87ba5bd89634-bpf-maps\") on node \"ci-4284-0-0-n-df1b790171.novalocal\" DevicePath \"\"" May 15 01:10:51.633488 kubelet[2713]: I0515 01:10:51.632948 2713 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e89a2e4d-62b4-4294-ace6-87ba5bd89634-hostproc\") on node \"ci-4284-0-0-n-df1b790171.novalocal\" DevicePath \"\"" May 15 01:10:51.633488 kubelet[2713]: I0515 01:10:51.632959 2713 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e89a2e4d-62b4-4294-ace6-87ba5bd89634-clustermesh-secrets\") on node \"ci-4284-0-0-n-df1b790171.novalocal\" DevicePath \"\"" May 15 01:10:51.633488 kubelet[2713]: I0515 01:10:51.632972 2713 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-2vx74\" (UniqueName: \"kubernetes.io/projected/2953288d-cd7c-45a0-b6af-08868a2e32ea-kube-api-access-2vx74\") on node \"ci-4284-0-0-n-df1b790171.novalocal\" DevicePath \"\"" May 15 01:10:51.633488 kubelet[2713]: I0515 01:10:51.632983 2713 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-f8wz6\" (UniqueName: \"kubernetes.io/projected/e89a2e4d-62b4-4294-ace6-87ba5bd89634-kube-api-access-f8wz6\") on node \"ci-4284-0-0-n-df1b790171.novalocal\" DevicePath \"\"" May 15 01:10:51.633488 kubelet[2713]: I0515 01:10:51.632994 2713 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e89a2e4d-62b4-4294-ace6-87ba5bd89634-cilium-config-path\") on node \"ci-4284-0-0-n-df1b790171.novalocal\" DevicePath \"\"" May 15 01:10:51.634089 kubelet[2713]: I0515 01:10:51.633005 2713 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e89a2e4d-62b4-4294-ace6-87ba5bd89634-host-proc-sys-kernel\") on node \"ci-4284-0-0-n-df1b790171.novalocal\" DevicePath \"\"" May 
15 01:10:51.634089 kubelet[2713]: I0515 01:10:51.633016 2713 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2953288d-cd7c-45a0-b6af-08868a2e32ea-cilium-config-path\") on node \"ci-4284-0-0-n-df1b790171.novalocal\" DevicePath \"\"" May 15 01:10:51.634089 kubelet[2713]: I0515 01:10:51.633027 2713 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e89a2e4d-62b4-4294-ace6-87ba5bd89634-hubble-tls\") on node \"ci-4284-0-0-n-df1b790171.novalocal\" DevicePath \"\"" May 15 01:10:52.005749 kubelet[2713]: I0515 01:10:52.005632 2713 scope.go:117] "RemoveContainer" containerID="fbac1840d41a05b579ee6a03807f157b2065b6453318e1ad5fb017227022277d" May 15 01:10:52.020070 containerd[1482]: time="2025-05-15T01:10:52.017245130Z" level=info msg="RemoveContainer for \"fbac1840d41a05b579ee6a03807f157b2065b6453318e1ad5fb017227022277d\"" May 15 01:10:52.050673 containerd[1482]: time="2025-05-15T01:10:52.050552498Z" level=info msg="RemoveContainer for \"fbac1840d41a05b579ee6a03807f157b2065b6453318e1ad5fb017227022277d\" returns successfully" May 15 01:10:52.051201 kubelet[2713]: I0515 01:10:52.051123 2713 scope.go:117] "RemoveContainer" containerID="386cb6ac14c0550aa9591ea5babd5cde59e9487e8e223b8984b4495db147a5e7" May 15 01:10:52.058255 containerd[1482]: time="2025-05-15T01:10:52.058109277Z" level=info msg="RemoveContainer for \"386cb6ac14c0550aa9591ea5babd5cde59e9487e8e223b8984b4495db147a5e7\"" May 15 01:10:52.075296 containerd[1482]: time="2025-05-15T01:10:52.074632389Z" level=info msg="RemoveContainer for \"386cb6ac14c0550aa9591ea5babd5cde59e9487e8e223b8984b4495db147a5e7\" returns successfully" May 15 01:10:52.076624 kubelet[2713]: I0515 01:10:52.076556 2713 scope.go:117] "RemoveContainer" containerID="13bd8c81138bdb9b204114b7d6d63ecd383a0b99d8a84cb05c135b2bc88d22b1" May 15 01:10:52.087820 containerd[1482]: time="2025-05-15T01:10:52.087761911Z" level=info msg="RemoveContainer for \"13bd8c81138bdb9b204114b7d6d63ecd383a0b99d8a84cb05c135b2bc88d22b1\"" May 15 01:10:52.096655 containerd[1482]: time="2025-05-15T01:10:52.096610872Z" level=info msg="RemoveContainer for \"13bd8c81138bdb9b204114b7d6d63ecd383a0b99d8a84cb05c135b2bc88d22b1\" returns successfully" May 15 01:10:52.097092 kubelet[2713]: I0515 01:10:52.097054 2713 scope.go:117] "RemoveContainer" containerID="fb3acaa8a988841c1befd658615d3ce6eeab18d5cfc3d7a6fd6a65b1b3971190" May 15 01:10:52.100103 containerd[1482]: time="2025-05-15T01:10:52.100062832Z" level=info msg="RemoveContainer for \"fb3acaa8a988841c1befd658615d3ce6eeab18d5cfc3d7a6fd6a65b1b3971190\"" May 15 01:10:52.108470 containerd[1482]: time="2025-05-15T01:10:52.108415901Z" level=info msg="RemoveContainer for \"fb3acaa8a988841c1befd658615d3ce6eeab18d5cfc3d7a6fd6a65b1b3971190\" returns successfully" May 15 01:10:52.108650 kubelet[2713]: I0515 01:10:52.108623 2713 scope.go:117] "RemoveContainer" containerID="ac11d2fa667b4975bef6f2275750c13f812e5d1eb0c8a492b259925ac26fb1f8" May 15 01:10:52.111016 containerd[1482]: time="2025-05-15T01:10:52.110974866Z" level=info msg="RemoveContainer for \"ac11d2fa667b4975bef6f2275750c13f812e5d1eb0c8a492b259925ac26fb1f8\"" May 15 01:10:52.116446 containerd[1482]: time="2025-05-15T01:10:52.116299741Z" level=info msg="RemoveContainer for \"ac11d2fa667b4975bef6f2275750c13f812e5d1eb0c8a492b259925ac26fb1f8\" returns successfully" May 15 01:10:52.116550 kubelet[2713]: I0515 01:10:52.116519 2713 scope.go:117] "RemoveContainer" 
containerID="fbac1840d41a05b579ee6a03807f157b2065b6453318e1ad5fb017227022277d" May 15 01:10:52.116896 containerd[1482]: time="2025-05-15T01:10:52.116794959Z" level=error msg="ContainerStatus for \"fbac1840d41a05b579ee6a03807f157b2065b6453318e1ad5fb017227022277d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"fbac1840d41a05b579ee6a03807f157b2065b6453318e1ad5fb017227022277d\": not found" May 15 01:10:52.117175 kubelet[2713]: E0515 01:10:52.117129 2713 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"fbac1840d41a05b579ee6a03807f157b2065b6453318e1ad5fb017227022277d\": not found" containerID="fbac1840d41a05b579ee6a03807f157b2065b6453318e1ad5fb017227022277d" May 15 01:10:52.117423 kubelet[2713]: I0515 01:10:52.117220 2713 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"fbac1840d41a05b579ee6a03807f157b2065b6453318e1ad5fb017227022277d"} err="failed to get container status \"fbac1840d41a05b579ee6a03807f157b2065b6453318e1ad5fb017227022277d\": rpc error: code = NotFound desc = an error occurred when try to find container \"fbac1840d41a05b579ee6a03807f157b2065b6453318e1ad5fb017227022277d\": not found" May 15 01:10:52.117423 kubelet[2713]: I0515 01:10:52.117412 2713 scope.go:117] "RemoveContainer" containerID="386cb6ac14c0550aa9591ea5babd5cde59e9487e8e223b8984b4495db147a5e7" May 15 01:10:52.117695 containerd[1482]: time="2025-05-15T01:10:52.117626899Z" level=error msg="ContainerStatus for \"386cb6ac14c0550aa9591ea5babd5cde59e9487e8e223b8984b4495db147a5e7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"386cb6ac14c0550aa9591ea5babd5cde59e9487e8e223b8984b4495db147a5e7\": not found" May 15 01:10:52.117821 kubelet[2713]: E0515 01:10:52.117791 2713 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"386cb6ac14c0550aa9591ea5babd5cde59e9487e8e223b8984b4495db147a5e7\": not found" containerID="386cb6ac14c0550aa9591ea5babd5cde59e9487e8e223b8984b4495db147a5e7" May 15 01:10:52.117972 kubelet[2713]: I0515 01:10:52.117853 2713 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"386cb6ac14c0550aa9591ea5babd5cde59e9487e8e223b8984b4495db147a5e7"} err="failed to get container status \"386cb6ac14c0550aa9591ea5babd5cde59e9487e8e223b8984b4495db147a5e7\": rpc error: code = NotFound desc = an error occurred when try to find container \"386cb6ac14c0550aa9591ea5babd5cde59e9487e8e223b8984b4495db147a5e7\": not found" May 15 01:10:52.117972 kubelet[2713]: I0515 01:10:52.117886 2713 scope.go:117] "RemoveContainer" containerID="13bd8c81138bdb9b204114b7d6d63ecd383a0b99d8a84cb05c135b2bc88d22b1" May 15 01:10:52.118244 kubelet[2713]: E0515 01:10:52.118221 2713 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"13bd8c81138bdb9b204114b7d6d63ecd383a0b99d8a84cb05c135b2bc88d22b1\": not found" containerID="13bd8c81138bdb9b204114b7d6d63ecd383a0b99d8a84cb05c135b2bc88d22b1" May 15 01:10:52.118322 containerd[1482]: time="2025-05-15T01:10:52.118109193Z" level=error msg="ContainerStatus for \"13bd8c81138bdb9b204114b7d6d63ecd383a0b99d8a84cb05c135b2bc88d22b1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"13bd8c81138bdb9b204114b7d6d63ecd383a0b99d8a84cb05c135b2bc88d22b1\": not found" May 15 01:10:52.118597 kubelet[2713]: I0515 01:10:52.118243 2713 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"13bd8c81138bdb9b204114b7d6d63ecd383a0b99d8a84cb05c135b2bc88d22b1"} err="failed to get container status \"13bd8c81138bdb9b204114b7d6d63ecd383a0b99d8a84cb05c135b2bc88d22b1\": rpc error: code = NotFound desc = an error occurred when try to find container \"13bd8c81138bdb9b204114b7d6d63ecd383a0b99d8a84cb05c135b2bc88d22b1\": not found" May 15 01:10:52.118597 kubelet[2713]: I0515 01:10:52.118259 2713 scope.go:117] "RemoveContainer" containerID="fb3acaa8a988841c1befd658615d3ce6eeab18d5cfc3d7a6fd6a65b1b3971190" May 15 01:10:52.118880 containerd[1482]: time="2025-05-15T01:10:52.118816446Z" level=error msg="ContainerStatus for \"fb3acaa8a988841c1befd658615d3ce6eeab18d5cfc3d7a6fd6a65b1b3971190\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"fb3acaa8a988841c1befd658615d3ce6eeab18d5cfc3d7a6fd6a65b1b3971190\": not found" May 15 01:10:52.119031 kubelet[2713]: E0515 01:10:52.118945 2713 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"fb3acaa8a988841c1befd658615d3ce6eeab18d5cfc3d7a6fd6a65b1b3971190\": not found" containerID="fb3acaa8a988841c1befd658615d3ce6eeab18d5cfc3d7a6fd6a65b1b3971190" May 15 01:10:52.119031 kubelet[2713]: I0515 01:10:52.118967 2713 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"fb3acaa8a988841c1befd658615d3ce6eeab18d5cfc3d7a6fd6a65b1b3971190"} err="failed to get container status \"fb3acaa8a988841c1befd658615d3ce6eeab18d5cfc3d7a6fd6a65b1b3971190\": rpc error: code = NotFound desc = an error occurred when try to find container \"fb3acaa8a988841c1befd658615d3ce6eeab18d5cfc3d7a6fd6a65b1b3971190\": not found" May 15 01:10:52.119031 kubelet[2713]: I0515 01:10:52.118982 2713 scope.go:117] "RemoveContainer" containerID="ac11d2fa667b4975bef6f2275750c13f812e5d1eb0c8a492b259925ac26fb1f8" May 15 01:10:52.120000 kubelet[2713]: E0515 01:10:52.119248 2713 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ac11d2fa667b4975bef6f2275750c13f812e5d1eb0c8a492b259925ac26fb1f8\": not found" containerID="ac11d2fa667b4975bef6f2275750c13f812e5d1eb0c8a492b259925ac26fb1f8" May 15 01:10:52.120000 kubelet[2713]: I0515 01:10:52.119363 2713 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ac11d2fa667b4975bef6f2275750c13f812e5d1eb0c8a492b259925ac26fb1f8"} err="failed to get container status \"ac11d2fa667b4975bef6f2275750c13f812e5d1eb0c8a492b259925ac26fb1f8\": rpc error: code = NotFound desc = an error occurred when try to find container \"ac11d2fa667b4975bef6f2275750c13f812e5d1eb0c8a492b259925ac26fb1f8\": not found" May 15 01:10:52.120000 kubelet[2713]: I0515 01:10:52.119388 2713 scope.go:117] "RemoveContainer" containerID="8dbbabd422b6467ef09814a72f76daea6719b7b404962b37c351e344c064f5f3" May 15 01:10:52.120102 containerd[1482]: time="2025-05-15T01:10:52.119136553Z" level=error msg="ContainerStatus for \"ac11d2fa667b4975bef6f2275750c13f812e5d1eb0c8a492b259925ac26fb1f8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ac11d2fa667b4975bef6f2275750c13f812e5d1eb0c8a492b259925ac26fb1f8\": not 
found" May 15 01:10:52.120979 containerd[1482]: time="2025-05-15T01:10:52.120956366Z" level=info msg="RemoveContainer for \"8dbbabd422b6467ef09814a72f76daea6719b7b404962b37c351e344c064f5f3\"" May 15 01:10:52.127323 containerd[1482]: time="2025-05-15T01:10:52.126567422Z" level=info msg="RemoveContainer for \"8dbbabd422b6467ef09814a72f76daea6719b7b404962b37c351e344c064f5f3\" returns successfully" May 15 01:10:52.127323 containerd[1482]: time="2025-05-15T01:10:52.127045249Z" level=error msg="ContainerStatus for \"8dbbabd422b6467ef09814a72f76daea6719b7b404962b37c351e344c064f5f3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8dbbabd422b6467ef09814a72f76daea6719b7b404962b37c351e344c064f5f3\": not found" May 15 01:10:52.127540 kubelet[2713]: I0515 01:10:52.126817 2713 scope.go:117] "RemoveContainer" containerID="8dbbabd422b6467ef09814a72f76daea6719b7b404962b37c351e344c064f5f3" May 15 01:10:52.127540 kubelet[2713]: E0515 01:10:52.127156 2713 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8dbbabd422b6467ef09814a72f76daea6719b7b404962b37c351e344c064f5f3\": not found" containerID="8dbbabd422b6467ef09814a72f76daea6719b7b404962b37c351e344c064f5f3" May 15 01:10:52.127540 kubelet[2713]: I0515 01:10:52.127181 2713 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8dbbabd422b6467ef09814a72f76daea6719b7b404962b37c351e344c064f5f3"} err="failed to get container status \"8dbbabd422b6467ef09814a72f76daea6719b7b404962b37c351e344c064f5f3\": rpc error: code = NotFound desc = an error occurred when try to find container \"8dbbabd422b6467ef09814a72f76daea6719b7b404962b37c351e344c064f5f3\": not found" May 15 01:10:52.205903 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-db23e80c84fdde99996535a9633855bdce6d049d5c0a5ed8f3ceda9cf64d6bba-shm.mount: Deactivated successfully. May 15 01:10:52.206129 systemd[1]: var-lib-kubelet-pods-e89a2e4d\x2d62b4\x2d4294\x2dace6\x2d87ba5bd89634-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2df8wz6.mount: Deactivated successfully. May 15 01:10:52.206367 systemd[1]: var-lib-kubelet-pods-2953288d\x2dcd7c\x2d45a0\x2db6af\x2d08868a2e32ea-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d2vx74.mount: Deactivated successfully. May 15 01:10:52.206545 systemd[1]: var-lib-kubelet-pods-e89a2e4d\x2d62b4\x2d4294\x2dace6\x2d87ba5bd89634-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 15 01:10:52.206712 systemd[1]: var-lib-kubelet-pods-e89a2e4d\x2d62b4\x2d4294\x2dace6\x2d87ba5bd89634-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
May 15 01:10:52.684540 containerd[1482]: time="2025-05-15T01:10:52.684207382Z" level=info msg="TaskExit event in podsandbox handler container_id:\"db23e80c84fdde99996535a9633855bdce6d049d5c0a5ed8f3ceda9cf64d6bba\" id:\"db23e80c84fdde99996535a9633855bdce6d049d5c0a5ed8f3ceda9cf64d6bba\" pid:2877 exit_status:137 exited_at:{seconds:1747271451 nanos:270541413}" May 15 01:10:52.880989 kubelet[2713]: E0515 01:10:52.880854 2713 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 15 01:10:53.198491 sshd[4259]: Connection closed by 172.24.4.1 port 54206 May 15 01:10:53.200138 sshd-session[4255]: pam_unix(sshd:session): session closed for user core May 15 01:10:53.221692 systemd[1]: sshd@23-172.24.4.204:22-172.24.4.1:54206.service: Deactivated successfully. May 15 01:10:53.231796 systemd[1]: session-26.scope: Deactivated successfully. May 15 01:10:53.232810 systemd[1]: session-26.scope: Consumed 1.036s CPU time, 23.9M memory peak. May 15 01:10:53.235344 systemd-logind[1462]: Session 26 logged out. Waiting for processes to exit. May 15 01:10:53.247912 systemd[1]: Started sshd@24-172.24.4.204:22-172.24.4.1:54220.service - OpenSSH per-connection server daemon (172.24.4.1:54220). May 15 01:10:53.255068 systemd-logind[1462]: Removed session 26. May 15 01:10:53.624332 kubelet[2713]: I0515 01:10:53.623535 2713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2953288d-cd7c-45a0-b6af-08868a2e32ea" path="/var/lib/kubelet/pods/2953288d-cd7c-45a0-b6af-08868a2e32ea/volumes" May 15 01:10:53.625228 kubelet[2713]: I0515 01:10:53.625180 2713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e89a2e4d-62b4-4294-ace6-87ba5bd89634" path="/var/lib/kubelet/pods/e89a2e4d-62b4-4294-ace6-87ba5bd89634/volumes" May 15 01:10:53.870081 kubelet[2713]: I0515 01:10:53.869953 2713 setters.go:602] "Node became not ready" node="ci-4284-0-0-n-df1b790171.novalocal" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-15T01:10:53Z","lastTransitionTime":"2025-05-15T01:10:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} May 15 01:10:54.354067 sshd[4409]: Accepted publickey for core from 172.24.4.1 port 54220 ssh2: RSA SHA256:FgM6DNLOO9l7igabXcQRJCJ/iDzuk0CAYzdzDa1bmG0 May 15 01:10:54.357511 sshd-session[4409]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 01:10:54.370342 systemd-logind[1462]: New session 27 of user core. May 15 01:10:54.384620 systemd[1]: Started session-27.scope - Session 27 of User core. May 15 01:10:55.599576 kubelet[2713]: I0515 01:10:55.598156 2713 memory_manager.go:355] "RemoveStaleState removing state" podUID="2953288d-cd7c-45a0-b6af-08868a2e32ea" containerName="cilium-operator" May 15 01:10:55.599576 kubelet[2713]: I0515 01:10:55.598199 2713 memory_manager.go:355] "RemoveStaleState removing state" podUID="e89a2e4d-62b4-4294-ace6-87ba5bd89634" containerName="cilium-agent" May 15 01:10:55.611458 systemd[1]: Created slice kubepods-burstable-pod11b1b2c6_8806_4c58_9785_415cb65cddfe.slice - libcontainer container kubepods-burstable-pod11b1b2c6_8806_4c58_9785_415cb65cddfe.slice. 
May 15 01:10:55.705139 sshd[4412]: Connection closed by 172.24.4.1 port 54220 May 15 01:10:55.705029 sshd-session[4409]: pam_unix(sshd:session): session closed for user core May 15 01:10:55.725391 systemd[1]: sshd@24-172.24.4.204:22-172.24.4.1:54220.service: Deactivated successfully. May 15 01:10:55.728977 systemd[1]: session-27.scope: Deactivated successfully. May 15 01:10:55.732621 systemd-logind[1462]: Session 27 logged out. Waiting for processes to exit. May 15 01:10:55.737091 systemd[1]: Started sshd@25-172.24.4.204:22-172.24.4.1:54838.service - OpenSSH per-connection server daemon (172.24.4.1:54838). May 15 01:10:55.741628 systemd-logind[1462]: Removed session 27. May 15 01:10:55.758847 kubelet[2713]: I0515 01:10:55.758784 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/11b1b2c6-8806-4c58-9785-415cb65cddfe-cilium-config-path\") pod \"cilium-94l5k\" (UID: \"11b1b2c6-8806-4c58-9785-415cb65cddfe\") " pod="kube-system/cilium-94l5k" May 15 01:10:55.758993 kubelet[2713]: I0515 01:10:55.758883 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/11b1b2c6-8806-4c58-9785-415cb65cddfe-etc-cni-netd\") pod \"cilium-94l5k\" (UID: \"11b1b2c6-8806-4c58-9785-415cb65cddfe\") " pod="kube-system/cilium-94l5k" May 15 01:10:55.758993 kubelet[2713]: I0515 01:10:55.758964 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/11b1b2c6-8806-4c58-9785-415cb65cddfe-hostproc\") pod \"cilium-94l5k\" (UID: \"11b1b2c6-8806-4c58-9785-415cb65cddfe\") " pod="kube-system/cilium-94l5k" May 15 01:10:55.759163 kubelet[2713]: I0515 01:10:55.759013 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/11b1b2c6-8806-4c58-9785-415cb65cddfe-cilium-cgroup\") pod \"cilium-94l5k\" (UID: \"11b1b2c6-8806-4c58-9785-415cb65cddfe\") " pod="kube-system/cilium-94l5k" May 15 01:10:55.759163 kubelet[2713]: I0515 01:10:55.759057 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/11b1b2c6-8806-4c58-9785-415cb65cddfe-cni-path\") pod \"cilium-94l5k\" (UID: \"11b1b2c6-8806-4c58-9785-415cb65cddfe\") " pod="kube-system/cilium-94l5k" May 15 01:10:55.759163 kubelet[2713]: I0515 01:10:55.759116 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/11b1b2c6-8806-4c58-9785-415cb65cddfe-xtables-lock\") pod \"cilium-94l5k\" (UID: \"11b1b2c6-8806-4c58-9785-415cb65cddfe\") " pod="kube-system/cilium-94l5k" May 15 01:10:55.759163 kubelet[2713]: I0515 01:10:55.759157 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/11b1b2c6-8806-4c58-9785-415cb65cddfe-host-proc-sys-net\") pod \"cilium-94l5k\" (UID: \"11b1b2c6-8806-4c58-9785-415cb65cddfe\") " pod="kube-system/cilium-94l5k" May 15 01:10:55.761032 kubelet[2713]: I0515 01:10:55.760885 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/11b1b2c6-8806-4c58-9785-415cb65cddfe-bpf-maps\") pod 
\"cilium-94l5k\" (UID: \"11b1b2c6-8806-4c58-9785-415cb65cddfe\") " pod="kube-system/cilium-94l5k" May 15 01:10:55.761572 kubelet[2713]: I0515 01:10:55.761508 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/11b1b2c6-8806-4c58-9785-415cb65cddfe-lib-modules\") pod \"cilium-94l5k\" (UID: \"11b1b2c6-8806-4c58-9785-415cb65cddfe\") " pod="kube-system/cilium-94l5k" May 15 01:10:55.761865 kubelet[2713]: I0515 01:10:55.761736 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/11b1b2c6-8806-4c58-9785-415cb65cddfe-cilium-ipsec-secrets\") pod \"cilium-94l5k\" (UID: \"11b1b2c6-8806-4c58-9785-415cb65cddfe\") " pod="kube-system/cilium-94l5k" May 15 01:10:55.762536 kubelet[2713]: I0515 01:10:55.762111 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/11b1b2c6-8806-4c58-9785-415cb65cddfe-host-proc-sys-kernel\") pod \"cilium-94l5k\" (UID: \"11b1b2c6-8806-4c58-9785-415cb65cddfe\") " pod="kube-system/cilium-94l5k" May 15 01:10:55.762536 kubelet[2713]: I0515 01:10:55.762352 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/11b1b2c6-8806-4c58-9785-415cb65cddfe-hubble-tls\") pod \"cilium-94l5k\" (UID: \"11b1b2c6-8806-4c58-9785-415cb65cddfe\") " pod="kube-system/cilium-94l5k" May 15 01:10:55.762929 kubelet[2713]: I0515 01:10:55.762465 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zgqvd\" (UniqueName: \"kubernetes.io/projected/11b1b2c6-8806-4c58-9785-415cb65cddfe-kube-api-access-zgqvd\") pod \"cilium-94l5k\" (UID: \"11b1b2c6-8806-4c58-9785-415cb65cddfe\") " pod="kube-system/cilium-94l5k" May 15 01:10:55.762929 kubelet[2713]: I0515 01:10:55.762880 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/11b1b2c6-8806-4c58-9785-415cb65cddfe-cilium-run\") pod \"cilium-94l5k\" (UID: \"11b1b2c6-8806-4c58-9785-415cb65cddfe\") " pod="kube-system/cilium-94l5k" May 15 01:10:55.763335 kubelet[2713]: I0515 01:10:55.763156 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/11b1b2c6-8806-4c58-9785-415cb65cddfe-clustermesh-secrets\") pod \"cilium-94l5k\" (UID: \"11b1b2c6-8806-4c58-9785-415cb65cddfe\") " pod="kube-system/cilium-94l5k" May 15 01:10:56.227247 containerd[1482]: time="2025-05-15T01:10:56.225551758Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-94l5k,Uid:11b1b2c6-8806-4c58-9785-415cb65cddfe,Namespace:kube-system,Attempt:0,}" May 15 01:10:56.279623 containerd[1482]: time="2025-05-15T01:10:56.279501494Z" level=info msg="connecting to shim cb103d58f3740c536e1747f0543600eff1a8cc2a8e6cd9f2f2ff1c6223639483" address="unix:///run/containerd/s/e5efdb03c7c63e548a4c7bb031e89fd804ec78dad61302e842813867c471218f" namespace=k8s.io protocol=ttrpc version=3 May 15 01:10:56.352514 systemd[1]: Started cri-containerd-cb103d58f3740c536e1747f0543600eff1a8cc2a8e6cd9f2f2ff1c6223639483.scope - libcontainer container cb103d58f3740c536e1747f0543600eff1a8cc2a8e6cd9f2f2ff1c6223639483. 
May 15 01:10:56.387167 containerd[1482]: time="2025-05-15T01:10:56.387105664Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-94l5k,Uid:11b1b2c6-8806-4c58-9785-415cb65cddfe,Namespace:kube-system,Attempt:0,} returns sandbox id \"cb103d58f3740c536e1747f0543600eff1a8cc2a8e6cd9f2f2ff1c6223639483\"" May 15 01:10:56.392800 containerd[1482]: time="2025-05-15T01:10:56.392740512Z" level=info msg="CreateContainer within sandbox \"cb103d58f3740c536e1747f0543600eff1a8cc2a8e6cd9f2f2ff1c6223639483\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 15 01:10:56.403043 containerd[1482]: time="2025-05-15T01:10:56.403006663Z" level=info msg="Container 06237b23e09e5a1220621617b01fcb8f7e24879be5cd06d4d2edd751f3f5243c: CDI devices from CRI Config.CDIDevices: []" May 15 01:10:56.414328 containerd[1482]: time="2025-05-15T01:10:56.414172973Z" level=info msg="CreateContainer within sandbox \"cb103d58f3740c536e1747f0543600eff1a8cc2a8e6cd9f2f2ff1c6223639483\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"06237b23e09e5a1220621617b01fcb8f7e24879be5cd06d4d2edd751f3f5243c\"" May 15 01:10:56.416636 containerd[1482]: time="2025-05-15T01:10:56.415215812Z" level=info msg="StartContainer for \"06237b23e09e5a1220621617b01fcb8f7e24879be5cd06d4d2edd751f3f5243c\"" May 15 01:10:56.416636 containerd[1482]: time="2025-05-15T01:10:56.416166728Z" level=info msg="connecting to shim 06237b23e09e5a1220621617b01fcb8f7e24879be5cd06d4d2edd751f3f5243c" address="unix:///run/containerd/s/e5efdb03c7c63e548a4c7bb031e89fd804ec78dad61302e842813867c471218f" protocol=ttrpc version=3 May 15 01:10:56.440440 systemd[1]: Started cri-containerd-06237b23e09e5a1220621617b01fcb8f7e24879be5cd06d4d2edd751f3f5243c.scope - libcontainer container 06237b23e09e5a1220621617b01fcb8f7e24879be5cd06d4d2edd751f3f5243c. May 15 01:10:56.482115 containerd[1482]: time="2025-05-15T01:10:56.481990997Z" level=info msg="StartContainer for \"06237b23e09e5a1220621617b01fcb8f7e24879be5cd06d4d2edd751f3f5243c\" returns successfully" May 15 01:10:56.500256 systemd[1]: cri-containerd-06237b23e09e5a1220621617b01fcb8f7e24879be5cd06d4d2edd751f3f5243c.scope: Deactivated successfully. 
May 15 01:10:56.502133 containerd[1482]: time="2025-05-15T01:10:56.502099535Z" level=info msg="TaskExit event in podsandbox handler container_id:\"06237b23e09e5a1220621617b01fcb8f7e24879be5cd06d4d2edd751f3f5243c\" id:\"06237b23e09e5a1220621617b01fcb8f7e24879be5cd06d4d2edd751f3f5243c\" pid:4488 exited_at:{seconds:1747271456 nanos:501091571}" May 15 01:10:56.502486 containerd[1482]: time="2025-05-15T01:10:56.502457514Z" level=info msg="received exit event container_id:\"06237b23e09e5a1220621617b01fcb8f7e24879be5cd06d4d2edd751f3f5243c\" id:\"06237b23e09e5a1220621617b01fcb8f7e24879be5cd06d4d2edd751f3f5243c\" pid:4488 exited_at:{seconds:1747271456 nanos:501091571}" May 15 01:10:57.054814 containerd[1482]: time="2025-05-15T01:10:57.053716534Z" level=info msg="CreateContainer within sandbox \"cb103d58f3740c536e1747f0543600eff1a8cc2a8e6cd9f2f2ff1c6223639483\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 15 01:10:57.084762 containerd[1482]: time="2025-05-15T01:10:57.084665981Z" level=info msg="Container 4abbeb1666f879f985d5d208e6ad6346484567b8312e346e3c0f20a901f19ce7: CDI devices from CRI Config.CDIDevices: []" May 15 01:10:57.112125 sshd[4425]: Accepted publickey for core from 172.24.4.1 port 54838 ssh2: RSA SHA256:FgM6DNLOO9l7igabXcQRJCJ/iDzuk0CAYzdzDa1bmG0 May 15 01:10:57.116483 sshd-session[4425]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 01:10:57.127503 containerd[1482]: time="2025-05-15T01:10:57.127332857Z" level=info msg="CreateContainer within sandbox \"cb103d58f3740c536e1747f0543600eff1a8cc2a8e6cd9f2f2ff1c6223639483\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"4abbeb1666f879f985d5d208e6ad6346484567b8312e346e3c0f20a901f19ce7\"" May 15 01:10:57.129036 containerd[1482]: time="2025-05-15T01:10:57.128986297Z" level=info msg="StartContainer for \"4abbeb1666f879f985d5d208e6ad6346484567b8312e346e3c0f20a901f19ce7\"" May 15 01:10:57.136060 containerd[1482]: time="2025-05-15T01:10:57.135980988Z" level=info msg="connecting to shim 4abbeb1666f879f985d5d208e6ad6346484567b8312e346e3c0f20a901f19ce7" address="unix:///run/containerd/s/e5efdb03c7c63e548a4c7bb031e89fd804ec78dad61302e842813867c471218f" protocol=ttrpc version=3 May 15 01:10:57.138650 systemd-logind[1462]: New session 28 of user core. May 15 01:10:57.144450 systemd[1]: Started session-28.scope - Session 28 of User core. May 15 01:10:57.167484 systemd[1]: Started cri-containerd-4abbeb1666f879f985d5d208e6ad6346484567b8312e346e3c0f20a901f19ce7.scope - libcontainer container 4abbeb1666f879f985d5d208e6ad6346484567b8312e346e3c0f20a901f19ce7. May 15 01:10:57.204936 containerd[1482]: time="2025-05-15T01:10:57.204197338Z" level=info msg="StartContainer for \"4abbeb1666f879f985d5d208e6ad6346484567b8312e346e3c0f20a901f19ce7\" returns successfully" May 15 01:10:57.215789 systemd[1]: cri-containerd-4abbeb1666f879f985d5d208e6ad6346484567b8312e346e3c0f20a901f19ce7.scope: Deactivated successfully. 
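The mount-cgroup entries above show the cycle that repeats for each Cilium init step in this log: CreateContainer inside the cb103d58... sandbox, StartContainer, the cri-containerd scope deactivating, and a TaskExit event carrying the pid and exit timestamp. As a standalone sketch of how such an exit can be observed, and not code taken from the kubelet or the CRI plugin running here, the containerd 1.x Go client can wait on a task in the k8s.io namespace roughly as follows; the container ID is a placeholder.

    package main

    import (
        "context"
        "fmt"
        "log"

        containerd "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        // Connect to the default containerd socket.
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        // CRI-managed containers live in the k8s.io namespace, as in the log above.
        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

        // Placeholder ID; in the log this would be an ID such as 06237b23e09e....
        container, err := client.LoadContainer(ctx, "example-container-id")
        if err != nil {
            log.Fatal(err)
        }

        task, err := container.Task(ctx, nil)
        if err != nil {
            log.Fatal(err)
        }

        // Wait delivers the exit status once the task ends, which containerd
        // surfaces above as a TaskExit event with an exited_at timestamp.
        statusC, err := task.Wait(ctx)
        if err != nil {
            log.Fatal(err)
        }
        status := <-statusC
        code, exitedAt, err := status.Result()
        if err != nil {
            log.Fatal(err)
        }
        fmt.Printf("task exited with status %d at %s\n", code, exitedAt)
    }

In practice callers arrange the Wait before starting the task, so a fast-exiting process like these init containers is not missed.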
May 15 01:10:57.216157 containerd[1482]: time="2025-05-15T01:10:57.216008394Z" level=info msg="received exit event container_id:\"4abbeb1666f879f985d5d208e6ad6346484567b8312e346e3c0f20a901f19ce7\" id:\"4abbeb1666f879f985d5d208e6ad6346484567b8312e346e3c0f20a901f19ce7\" pid:4532 exited_at:{seconds:1747271457 nanos:215561676}" May 15 01:10:57.216251 containerd[1482]: time="2025-05-15T01:10:57.216219204Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4abbeb1666f879f985d5d208e6ad6346484567b8312e346e3c0f20a901f19ce7\" id:\"4abbeb1666f879f985d5d208e6ad6346484567b8312e346e3c0f20a901f19ce7\" pid:4532 exited_at:{seconds:1747271457 nanos:215561676}" May 15 01:10:57.239255 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4abbeb1666f879f985d5d208e6ad6346484567b8312e346e3c0f20a901f19ce7-rootfs.mount: Deactivated successfully. May 15 01:10:57.882350 kubelet[2713]: E0515 01:10:57.882241 2713 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 15 01:10:57.920782 sshd[4525]: Connection closed by 172.24.4.1 port 54838 May 15 01:10:57.923934 sshd-session[4425]: pam_unix(sshd:session): session closed for user core May 15 01:10:57.941652 systemd[1]: sshd@25-172.24.4.204:22-172.24.4.1:54838.service: Deactivated successfully. May 15 01:10:57.949444 systemd[1]: session-28.scope: Deactivated successfully. May 15 01:10:57.955447 systemd-logind[1462]: Session 28 logged out. Waiting for processes to exit. May 15 01:10:57.961936 systemd[1]: Started sshd@26-172.24.4.204:22-172.24.4.1:54846.service - OpenSSH per-connection server daemon (172.24.4.1:54846). May 15 01:10:57.965420 systemd-logind[1462]: Removed session 28. May 15 01:10:58.063784 containerd[1482]: time="2025-05-15T01:10:58.062364083Z" level=info msg="CreateContainer within sandbox \"cb103d58f3740c536e1747f0543600eff1a8cc2a8e6cd9f2f2ff1c6223639483\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 15 01:10:58.107349 containerd[1482]: time="2025-05-15T01:10:58.099682032Z" level=info msg="Container 19fcc65a368f4f5564b750b62b049226c080aedd3c57beb859b11dca809dd6b7: CDI devices from CRI Config.CDIDevices: []" May 15 01:10:58.131511 containerd[1482]: time="2025-05-15T01:10:58.129342559Z" level=info msg="CreateContainer within sandbox \"cb103d58f3740c536e1747f0543600eff1a8cc2a8e6cd9f2f2ff1c6223639483\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"19fcc65a368f4f5564b750b62b049226c080aedd3c57beb859b11dca809dd6b7\"" May 15 01:10:58.131511 containerd[1482]: time="2025-05-15T01:10:58.130071713Z" level=info msg="StartContainer for \"19fcc65a368f4f5564b750b62b049226c080aedd3c57beb859b11dca809dd6b7\"" May 15 01:10:58.132147 containerd[1482]: time="2025-05-15T01:10:58.132076501Z" level=info msg="connecting to shim 19fcc65a368f4f5564b750b62b049226c080aedd3c57beb859b11dca809dd6b7" address="unix:///run/containerd/s/e5efdb03c7c63e548a4c7bb031e89fd804ec78dad61302e842813867c471218f" protocol=ttrpc version=3 May 15 01:10:58.172467 systemd[1]: Started cri-containerd-19fcc65a368f4f5564b750b62b049226c080aedd3c57beb859b11dca809dd6b7.scope - libcontainer container 19fcc65a368f4f5564b750b62b049226c080aedd3c57beb859b11dca809dd6b7. May 15 01:10:58.249966 systemd[1]: cri-containerd-19fcc65a368f4f5564b750b62b049226c080aedd3c57beb859b11dca809dd6b7.scope: Deactivated successfully. 
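The mount-bpf-fs container created above is the init step that, in a typical Cilium install, makes sure a BPF filesystem is mounted at /sys/fs/bpf before the agent starts. As a rough illustration of that single operation, not Cilium's actual implementation, a privileged Go program using golang.org/x/sys/unix could do it roughly like this; the mount point is the conventional location and is assumed here.

    package main

    import (
        "log"

        "golang.org/x/sys/unix"
    )

    const bpfFSMagic = 0xcafe4a11 // BPF_FS_MAGIC

    func main() {
        target := "/sys/fs/bpf" // conventional mount point, assumed here

        // Skip the mount if a bpf filesystem is already present at the target.
        var st unix.Statfs_t
        if err := unix.Statfs(target, &st); err == nil && st.Type == bpfFSMagic {
            log.Printf("bpf filesystem already mounted at %s", target)
            return
        }

        // Equivalent to: mount -t bpf bpffs /sys/fs/bpf (requires CAP_SYS_ADMIN).
        if err := unix.Mount("bpffs", target, "bpf", 0, ""); err != nil {
            log.Fatalf("mounting bpffs at %s: %v", target, err)
        }
        log.Printf("mounted bpf filesystem at %s", target)
    }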
May 15 01:10:58.252616 containerd[1482]: time="2025-05-15T01:10:58.252510733Z" level=info msg="TaskExit event in podsandbox handler container_id:\"19fcc65a368f4f5564b750b62b049226c080aedd3c57beb859b11dca809dd6b7\" id:\"19fcc65a368f4f5564b750b62b049226c080aedd3c57beb859b11dca809dd6b7\" pid:4583 exited_at:{seconds:1747271458 nanos:252096096}" May 15 01:10:58.253927 containerd[1482]: time="2025-05-15T01:10:58.253785073Z" level=info msg="received exit event container_id:\"19fcc65a368f4f5564b750b62b049226c080aedd3c57beb859b11dca809dd6b7\" id:\"19fcc65a368f4f5564b750b62b049226c080aedd3c57beb859b11dca809dd6b7\" pid:4583 exited_at:{seconds:1747271458 nanos:252096096}" May 15 01:10:58.264757 containerd[1482]: time="2025-05-15T01:10:58.264712615Z" level=info msg="StartContainer for \"19fcc65a368f4f5564b750b62b049226c080aedd3c57beb859b11dca809dd6b7\" returns successfully" May 15 01:10:58.286036 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-19fcc65a368f4f5564b750b62b049226c080aedd3c57beb859b11dca809dd6b7-rootfs.mount: Deactivated successfully. May 15 01:10:59.066456 containerd[1482]: time="2025-05-15T01:10:59.066388615Z" level=info msg="CreateContainer within sandbox \"cb103d58f3740c536e1747f0543600eff1a8cc2a8e6cd9f2f2ff1c6223639483\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 15 01:10:59.083053 containerd[1482]: time="2025-05-15T01:10:59.082907419Z" level=info msg="Container b48b64af7b50ea7cf6fc46fbbd83f13c1b6996376f8533095cb4816eaf6de107: CDI devices from CRI Config.CDIDevices: []" May 15 01:10:59.099811 containerd[1482]: time="2025-05-15T01:10:59.099752002Z" level=info msg="CreateContainer within sandbox \"cb103d58f3740c536e1747f0543600eff1a8cc2a8e6cd9f2f2ff1c6223639483\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"b48b64af7b50ea7cf6fc46fbbd83f13c1b6996376f8533095cb4816eaf6de107\"" May 15 01:10:59.100527 containerd[1482]: time="2025-05-15T01:10:59.100463544Z" level=info msg="StartContainer for \"b48b64af7b50ea7cf6fc46fbbd83f13c1b6996376f8533095cb4816eaf6de107\"" May 15 01:10:59.104425 containerd[1482]: time="2025-05-15T01:10:59.104371927Z" level=info msg="connecting to shim b48b64af7b50ea7cf6fc46fbbd83f13c1b6996376f8533095cb4816eaf6de107" address="unix:///run/containerd/s/e5efdb03c7c63e548a4c7bb031e89fd804ec78dad61302e842813867c471218f" protocol=ttrpc version=3 May 15 01:10:59.146446 systemd[1]: Started cri-containerd-b48b64af7b50ea7cf6fc46fbbd83f13c1b6996376f8533095cb4816eaf6de107.scope - libcontainer container b48b64af7b50ea7cf6fc46fbbd83f13c1b6996376f8533095cb4816eaf6de107. May 15 01:10:59.181819 systemd[1]: cri-containerd-b48b64af7b50ea7cf6fc46fbbd83f13c1b6996376f8533095cb4816eaf6de107.scope: Deactivated successfully. 
May 15 01:10:59.184853 containerd[1482]: time="2025-05-15T01:10:59.184791950Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b48b64af7b50ea7cf6fc46fbbd83f13c1b6996376f8533095cb4816eaf6de107\" id:\"b48b64af7b50ea7cf6fc46fbbd83f13c1b6996376f8533095cb4816eaf6de107\" pid:4622 exited_at:{seconds:1747271459 nanos:183070290}" May 15 01:10:59.185516 containerd[1482]: time="2025-05-15T01:10:59.185360280Z" level=info msg="received exit event container_id:\"b48b64af7b50ea7cf6fc46fbbd83f13c1b6996376f8533095cb4816eaf6de107\" id:\"b48b64af7b50ea7cf6fc46fbbd83f13c1b6996376f8533095cb4816eaf6de107\" pid:4622 exited_at:{seconds:1747271459 nanos:183070290}" May 15 01:10:59.196071 containerd[1482]: time="2025-05-15T01:10:59.196031377Z" level=info msg="StartContainer for \"b48b64af7b50ea7cf6fc46fbbd83f13c1b6996376f8533095cb4816eaf6de107\" returns successfully" May 15 01:10:59.210011 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b48b64af7b50ea7cf6fc46fbbd83f13c1b6996376f8533095cb4816eaf6de107-rootfs.mount: Deactivated successfully. May 15 01:10:59.366502 sshd[4568]: Accepted publickey for core from 172.24.4.1 port 54846 ssh2: RSA SHA256:FgM6DNLOO9l7igabXcQRJCJ/iDzuk0CAYzdzDa1bmG0 May 15 01:10:59.367774 sshd-session[4568]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 01:10:59.382381 systemd-logind[1462]: New session 29 of user core. May 15 01:10:59.387590 systemd[1]: Started session-29.scope - Session 29 of User core. May 15 01:11:00.114330 containerd[1482]: time="2025-05-15T01:11:00.110441971Z" level=info msg="CreateContainer within sandbox \"cb103d58f3740c536e1747f0543600eff1a8cc2a8e6cd9f2f2ff1c6223639483\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 15 01:11:00.148653 containerd[1482]: time="2025-05-15T01:11:00.148194755Z" level=info msg="Container 8c0b424c5a4ada71965dafc49d9345ef53019c26178f37e2e68eb37630d2b5fd: CDI devices from CRI Config.CDIDevices: []" May 15 01:11:00.157842 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2836343337.mount: Deactivated successfully. May 15 01:11:00.177836 containerd[1482]: time="2025-05-15T01:11:00.177692338Z" level=info msg="CreateContainer within sandbox \"cb103d58f3740c536e1747f0543600eff1a8cc2a8e6cd9f2f2ff1c6223639483\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"8c0b424c5a4ada71965dafc49d9345ef53019c26178f37e2e68eb37630d2b5fd\"" May 15 01:11:00.178502 containerd[1482]: time="2025-05-15T01:11:00.178463552Z" level=info msg="StartContainer for \"8c0b424c5a4ada71965dafc49d9345ef53019c26178f37e2e68eb37630d2b5fd\"" May 15 01:11:00.179674 containerd[1482]: time="2025-05-15T01:11:00.179598539Z" level=info msg="connecting to shim 8c0b424c5a4ada71965dafc49d9345ef53019c26178f37e2e68eb37630d2b5fd" address="unix:///run/containerd/s/e5efdb03c7c63e548a4c7bb031e89fd804ec78dad61302e842813867c471218f" protocol=ttrpc version=3 May 15 01:11:00.205517 systemd[1]: Started cri-containerd-8c0b424c5a4ada71965dafc49d9345ef53019c26178f37e2e68eb37630d2b5fd.scope - libcontainer container 8c0b424c5a4ada71965dafc49d9345ef53019c26178f37e2e68eb37630d2b5fd. 
May 15 01:11:00.255932 containerd[1482]: time="2025-05-15T01:11:00.255863509Z" level=info msg="StartContainer for \"8c0b424c5a4ada71965dafc49d9345ef53019c26178f37e2e68eb37630d2b5fd\" returns successfully" May 15 01:11:00.369368 containerd[1482]: time="2025-05-15T01:11:00.369226990Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8c0b424c5a4ada71965dafc49d9345ef53019c26178f37e2e68eb37630d2b5fd\" id:\"5e4ae34cacd42dc3f05ffd1a42a5eada143e10a5011b776acf2a2ed86ef86ef1\" pid:4697 exited_at:{seconds:1747271460 nanos:368507945}" May 15 01:11:00.663329 kernel: cryptd: max_cpu_qlen set to 1000 May 15 01:11:00.727363 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm_base(ctr(aes-generic),ghash-generic)))) May 15 01:11:02.184502 containerd[1482]: time="2025-05-15T01:11:02.184448217Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8c0b424c5a4ada71965dafc49d9345ef53019c26178f37e2e68eb37630d2b5fd\" id:\"3a44e8be3e94d19c18e111303a714671109774fc360fdc489d0aea78076cd316\" pid:4828 exit_status:1 exited_at:{seconds:1747271462 nanos:183631555}" May 15 01:11:04.160955 systemd-networkd[1381]: lxc_health: Link UP May 15 01:11:04.170108 systemd-networkd[1381]: lxc_health: Gained carrier May 15 01:11:04.280057 kubelet[2713]: I0515 01:11:04.279458 2713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-94l5k" podStartSLOduration=9.279419583 podStartE2EDuration="9.279419583s" podCreationTimestamp="2025-05-15 01:10:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 01:11:01.152567949 +0000 UTC m=+283.707000959" watchObservedRunningTime="2025-05-15 01:11:04.279419583 +0000 UTC m=+286.833852543" May 15 01:11:04.522553 containerd[1482]: time="2025-05-15T01:11:04.522496318Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8c0b424c5a4ada71965dafc49d9345ef53019c26178f37e2e68eb37630d2b5fd\" id:\"11e174c19afecfe05d4135ba3345f08f1a32ffb88fae41ab90045fa2dc97867f\" pid:5268 exit_status:1 exited_at:{seconds:1747271464 nanos:520809854}" May 15 01:11:05.590435 systemd-networkd[1381]: lxc_health: Gained IPv6LL May 15 01:11:06.742376 containerd[1482]: time="2025-05-15T01:11:06.742311054Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8c0b424c5a4ada71965dafc49d9345ef53019c26178f37e2e68eb37630d2b5fd\" id:\"f60100f6ce55b28b76ed559d39d6561cbc8fbb5021f1dcb74b316e309cee89f7\" pid:5304 exited_at:{seconds:1747271466 nanos:741255848}" May 15 01:11:06.746298 kubelet[2713]: E0515 01:11:06.746054 2713 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:42774->127.0.0.1:46519: write tcp 127.0.0.1:42774->127.0.0.1:46519: write: connection reset by peer May 15 01:11:09.033414 containerd[1482]: time="2025-05-15T01:11:09.033230762Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8c0b424c5a4ada71965dafc49d9345ef53019c26178f37e2e68eb37630d2b5fd\" id:\"c4fcae819066807592a5d6715a62c593ebf07cdaecae3b7f1a2c47e4478bd6c9\" pid:5332 exited_at:{seconds:1747271469 nanos:32734288}" May 15 01:11:11.064376 containerd[1482]: time="2025-05-15T01:11:11.062842884Z" level=warning msg="container event discarded" container=6edc5b687f3b79882894ca224d3869ed9ed3e52c65ef6371abe44245df1693f5 type=CONTAINER_CREATED_EVENT May 15 01:11:11.064376 containerd[1482]: time="2025-05-15T01:11:11.063023367Z" level=warning msg="container event discarded" 
container=6edc5b687f3b79882894ca224d3869ed9ed3e52c65ef6371abe44245df1693f5 type=CONTAINER_STARTED_EVENT May 15 01:11:11.110436 containerd[1482]: time="2025-05-15T01:11:11.110234146Z" level=warning msg="container event discarded" container=16f937ec5215c4a3893e3d5512a15a2e0122fa07d7bb261389bece0cbe1d5773 type=CONTAINER_CREATED_EVENT May 15 01:11:11.110436 containerd[1482]: time="2025-05-15T01:11:11.110362731Z" level=warning msg="container event discarded" container=b1c1dba16899fd5fd435ac2233ad053414b5caa1f805f150496f60201bb22ba9 type=CONTAINER_CREATED_EVENT May 15 01:11:11.110904 containerd[1482]: time="2025-05-15T01:11:11.110454575Z" level=warning msg="container event discarded" container=b1c1dba16899fd5fd435ac2233ad053414b5caa1f805f150496f60201bb22ba9 type=CONTAINER_STARTED_EVENT May 15 01:11:11.110904 containerd[1482]: time="2025-05-15T01:11:11.110524569Z" level=warning msg="container event discarded" container=604c90b90f48fbe28573647ac389da3fd3b92c341770517fcedd6b7f387c87f2 type=CONTAINER_CREATED_EVENT May 15 01:11:11.110904 containerd[1482]: time="2025-05-15T01:11:11.110548034Z" level=warning msg="container event discarded" container=604c90b90f48fbe28573647ac389da3fd3b92c341770517fcedd6b7f387c87f2 type=CONTAINER_STARTED_EVENT May 15 01:11:11.157577 containerd[1482]: time="2025-05-15T01:11:11.157327121Z" level=warning msg="container event discarded" container=3d9cf190aff7f0c3e6a1dbf92fe477c095ba7e06ff1c39d9e408820ad11edbd9 type=CONTAINER_CREATED_EVENT May 15 01:11:11.181043 containerd[1482]: time="2025-05-15T01:11:11.178823400Z" level=warning msg="container event discarded" container=838f1732c67dc2dc13b5cc940fb8dd9869a72ebae0973476e8d4c36a8440f2cc type=CONTAINER_CREATED_EVENT May 15 01:11:11.257433 containerd[1482]: time="2025-05-15T01:11:11.257352389Z" level=warning msg="container event discarded" container=16f937ec5215c4a3893e3d5512a15a2e0122fa07d7bb261389bece0cbe1d5773 type=CONTAINER_STARTED_EVENT May 15 01:11:11.316989 containerd[1482]: time="2025-05-15T01:11:11.316303759Z" level=warning msg="container event discarded" container=3d9cf190aff7f0c3e6a1dbf92fe477c095ba7e06ff1c39d9e408820ad11edbd9 type=CONTAINER_STARTED_EVENT May 15 01:11:11.316989 containerd[1482]: time="2025-05-15T01:11:11.316593541Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8c0b424c5a4ada71965dafc49d9345ef53019c26178f37e2e68eb37630d2b5fd\" id:\"c1e2b968108a1dfa830b0a6ca2cf2a973c9ff83df883ae78f65df1f096f401d1\" pid:5361 exited_at:{seconds:1747271471 nanos:316202187}" May 15 01:11:11.320546 kubelet[2713]: E0515 01:11:11.320499 2713 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 127.0.0.1:42780->127.0.0.1:46519: write tcp 172.24.4.204:10250->172.24.4.204:48996: write: broken pipe May 15 01:11:11.321218 kubelet[2713]: E0515 01:11:11.320609 2713 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:42780->127.0.0.1:46519: write tcp 127.0.0.1:42780->127.0.0.1:46519: write: broken pipe May 15 01:11:11.336134 containerd[1482]: time="2025-05-15T01:11:11.336071359Z" level=warning msg="container event discarded" container=838f1732c67dc2dc13b5cc940fb8dd9869a72ebae0973476e8d4c36a8440f2cc type=CONTAINER_STARTED_EVENT May 15 01:11:11.557443 sshd[4646]: Connection closed by 172.24.4.1 port 54846 May 15 01:11:11.561540 sshd-session[4568]: pam_unix(sshd:session): session closed for user core May 15 01:11:11.582913 systemd[1]: sshd@26-172.24.4.204:22-172.24.4.1:54846.service: Deactivated successfully. 
May 15 01:11:11.598073 systemd[1]: session-29.scope: Deactivated successfully. May 15 01:11:11.604103 systemd-logind[1462]: Session 29 logged out. Waiting for processes to exit. May 15 01:11:11.609878 systemd-logind[1462]: Removed session 29. May 15 01:11:17.678228 containerd[1482]: time="2025-05-15T01:11:17.676722542Z" level=info msg="StopPodSandbox for \"d9390dfaf65a87619c84f0612a962b722996ca6161c029cbc2b41a0170349a1a\"" May 15 01:11:17.678228 containerd[1482]: time="2025-05-15T01:11:17.677943105Z" level=info msg="TearDown network for sandbox \"d9390dfaf65a87619c84f0612a962b722996ca6161c029cbc2b41a0170349a1a\" successfully" May 15 01:11:17.678228 containerd[1482]: time="2025-05-15T01:11:17.677993161Z" level=info msg="StopPodSandbox for \"d9390dfaf65a87619c84f0612a962b722996ca6161c029cbc2b41a0170349a1a\" returns successfully" May 15 01:11:17.680620 containerd[1482]: time="2025-05-15T01:11:17.680477659Z" level=info msg="RemovePodSandbox for \"d9390dfaf65a87619c84f0612a962b722996ca6161c029cbc2b41a0170349a1a\"" May 15 01:11:17.680856 containerd[1482]: time="2025-05-15T01:11:17.680638275Z" level=info msg="Forcibly stopping sandbox \"d9390dfaf65a87619c84f0612a962b722996ca6161c029cbc2b41a0170349a1a\"" May 15 01:11:17.681001 containerd[1482]: time="2025-05-15T01:11:17.680936222Z" level=info msg="TearDown network for sandbox \"d9390dfaf65a87619c84f0612a962b722996ca6161c029cbc2b41a0170349a1a\" successfully" May 15 01:11:17.687401 containerd[1482]: time="2025-05-15T01:11:17.687208620Z" level=info msg="Ensure that sandbox d9390dfaf65a87619c84f0612a962b722996ca6161c029cbc2b41a0170349a1a in task-service has been cleanup successfully" May 15 01:11:17.696353 containerd[1482]: time="2025-05-15T01:11:17.696241623Z" level=info msg="RemovePodSandbox \"d9390dfaf65a87619c84f0612a962b722996ca6161c029cbc2b41a0170349a1a\" returns successfully" May 15 01:11:17.698601 containerd[1482]: time="2025-05-15T01:11:17.697616980Z" level=info msg="StopPodSandbox for \"db23e80c84fdde99996535a9633855bdce6d049d5c0a5ed8f3ceda9cf64d6bba\"" May 15 01:11:17.698601 containerd[1482]: time="2025-05-15T01:11:17.697960565Z" level=info msg="TearDown network for sandbox \"db23e80c84fdde99996535a9633855bdce6d049d5c0a5ed8f3ceda9cf64d6bba\" successfully" May 15 01:11:17.698601 containerd[1482]: time="2025-05-15T01:11:17.698025829Z" level=info msg="StopPodSandbox for \"db23e80c84fdde99996535a9633855bdce6d049d5c0a5ed8f3ceda9cf64d6bba\" returns successfully" May 15 01:11:17.699263 containerd[1482]: time="2025-05-15T01:11:17.699198771Z" level=info msg="RemovePodSandbox for \"db23e80c84fdde99996535a9633855bdce6d049d5c0a5ed8f3ceda9cf64d6bba\"" May 15 01:11:17.700394 containerd[1482]: time="2025-05-15T01:11:17.699631414Z" level=info msg="Forcibly stopping sandbox \"db23e80c84fdde99996535a9633855bdce6d049d5c0a5ed8f3ceda9cf64d6bba\"" May 15 01:11:17.700394 containerd[1482]: time="2025-05-15T01:11:17.699964960Z" level=info msg="TearDown network for sandbox \"db23e80c84fdde99996535a9633855bdce6d049d5c0a5ed8f3ceda9cf64d6bba\" successfully" May 15 01:11:17.703719 containerd[1482]: time="2025-05-15T01:11:17.703657307Z" level=info msg="Ensure that sandbox db23e80c84fdde99996535a9633855bdce6d049d5c0a5ed8f3ceda9cf64d6bba in task-service has been cleanup successfully" May 15 01:11:17.709806 containerd[1482]: time="2025-05-15T01:11:17.709747699Z" level=info msg="RemovePodSandbox \"db23e80c84fdde99996535a9633855bdce6d049d5c0a5ed8f3ceda9cf64d6bba\" returns successfully" May 15 01:11:24.329844 containerd[1482]: time="2025-05-15T01:11:24.329536513Z" 
level=warning msg="container event discarded" container=6a4764f1f2f26d3176251a2c40a91c8f82bf9d67a95d3ce08f7d4fd5983b0bb4 type=CONTAINER_CREATED_EVENT May 15 01:11:24.330871 containerd[1482]: time="2025-05-15T01:11:24.330540455Z" level=warning msg="container event discarded" container=6a4764f1f2f26d3176251a2c40a91c8f82bf9d67a95d3ce08f7d4fd5983b0bb4 type=CONTAINER_STARTED_EVENT May 15 01:11:24.349175 containerd[1482]: time="2025-05-15T01:11:24.349025091Z" level=warning msg="container event discarded" container=d9390dfaf65a87619c84f0612a962b722996ca6161c029cbc2b41a0170349a1a type=CONTAINER_CREATED_EVENT May 15 01:11:24.349175 containerd[1482]: time="2025-05-15T01:11:24.349117898Z" level=warning msg="container event discarded" container=d9390dfaf65a87619c84f0612a962b722996ca6161c029cbc2b41a0170349a1a type=CONTAINER_STARTED_EVENT May 15 01:11:24.415635 containerd[1482]: time="2025-05-15T01:11:24.415483474Z" level=warning msg="container event discarded" container=db23e80c84fdde99996535a9633855bdce6d049d5c0a5ed8f3ceda9cf64d6bba type=CONTAINER_CREATED_EVENT May 15 01:11:24.415635 containerd[1482]: time="2025-05-15T01:11:24.415597401Z" level=warning msg="container event discarded" container=db23e80c84fdde99996535a9633855bdce6d049d5c0a5ed8f3ceda9cf64d6bba type=CONTAINER_STARTED_EVENT May 15 01:11:24.493030 containerd[1482]: time="2025-05-15T01:11:24.492862026Z" level=warning msg="container event discarded" container=7afc88b2bea1334f2601ee8199e65097bbf2515f409ab736ae0857e3dfcd8507 type=CONTAINER_CREATED_EVENT May 15 01:11:24.652453 containerd[1482]: time="2025-05-15T01:11:24.652152243Z" level=warning msg="container event discarded" container=7afc88b2bea1334f2601ee8199e65097bbf2515f409ab736ae0857e3dfcd8507 type=CONTAINER_STARTED_EVENT May 15 01:11:32.941405 containerd[1482]: time="2025-05-15T01:11:32.941199330Z" level=warning msg="container event discarded" container=ac11d2fa667b4975bef6f2275750c13f812e5d1eb0c8a492b259925ac26fb1f8 type=CONTAINER_CREATED_EVENT May 15 01:11:33.047857 containerd[1482]: time="2025-05-15T01:11:33.047726059Z" level=warning msg="container event discarded" container=ac11d2fa667b4975bef6f2275750c13f812e5d1eb0c8a492b259925ac26fb1f8 type=CONTAINER_STARTED_EVENT May 15 01:11:34.671954 containerd[1482]: time="2025-05-15T01:11:34.671821815Z" level=warning msg="container event discarded" container=ac11d2fa667b4975bef6f2275750c13f812e5d1eb0c8a492b259925ac26fb1f8 type=CONTAINER_STOPPED_EVENT May 15 01:11:35.025383 containerd[1482]: time="2025-05-15T01:11:35.025216298Z" level=warning msg="container event discarded" container=fb3acaa8a988841c1befd658615d3ce6eeab18d5cfc3d7a6fd6a65b1b3971190 type=CONTAINER_CREATED_EVENT May 15 01:11:35.160404 containerd[1482]: time="2025-05-15T01:11:35.160072001Z" level=warning msg="container event discarded" container=fb3acaa8a988841c1befd658615d3ce6eeab18d5cfc3d7a6fd6a65b1b3971190 type=CONTAINER_STARTED_EVENT May 15 01:11:35.253712 containerd[1482]: time="2025-05-15T01:11:35.253557121Z" level=warning msg="container event discarded" container=fb3acaa8a988841c1befd658615d3ce6eeab18d5cfc3d7a6fd6a65b1b3971190 type=CONTAINER_STOPPED_EVENT May 15 01:11:35.822532 containerd[1482]: time="2025-05-15T01:11:35.822362306Z" level=warning msg="container event discarded" container=13bd8c81138bdb9b204114b7d6d63ecd383a0b99d8a84cb05c135b2bc88d22b1 type=CONTAINER_CREATED_EVENT May 15 01:11:35.939961 containerd[1482]: time="2025-05-15T01:11:35.939789907Z" level=warning msg="container event discarded" 
container=13bd8c81138bdb9b204114b7d6d63ecd383a0b99d8a84cb05c135b2bc88d22b1 type=CONTAINER_STARTED_EVENT May 15 01:11:35.996544 containerd[1482]: time="2025-05-15T01:11:35.996410331Z" level=warning msg="container event discarded" container=13bd8c81138bdb9b204114b7d6d63ecd383a0b99d8a84cb05c135b2bc88d22b1 type=CONTAINER_STOPPED_EVENT May 15 01:11:36.817667 containerd[1482]: time="2025-05-15T01:11:36.817425177Z" level=warning msg="container event discarded" container=386cb6ac14c0550aa9591ea5babd5cde59e9487e8e223b8984b4495db147a5e7 type=CONTAINER_CREATED_EVENT May 15 01:11:36.920046 containerd[1482]: time="2025-05-15T01:11:36.919910226Z" level=warning msg="container event discarded" container=386cb6ac14c0550aa9591ea5babd5cde59e9487e8e223b8984b4495db147a5e7 type=CONTAINER_STARTED_EVENT May 15 01:11:37.129743 containerd[1482]: time="2025-05-15T01:11:37.129407777Z" level=warning msg="container event discarded" container=386cb6ac14c0550aa9591ea5babd5cde59e9487e8e223b8984b4495db147a5e7 type=CONTAINER_STOPPED_EVENT May 15 01:11:37.430399 containerd[1482]: time="2025-05-15T01:11:37.429768235Z" level=warning msg="container event discarded" container=8dbbabd422b6467ef09814a72f76daea6719b7b404962b37c351e344c064f5f3 type=CONTAINER_CREATED_EVENT May 15 01:11:37.499812 containerd[1482]: time="2025-05-15T01:11:37.499684846Z" level=warning msg="container event discarded" container=8dbbabd422b6467ef09814a72f76daea6719b7b404962b37c351e344c064f5f3 type=CONTAINER_STARTED_EVENT May 15 01:11:37.837419 containerd[1482]: time="2025-05-15T01:11:37.837254786Z" level=warning msg="container event discarded" container=fbac1840d41a05b579ee6a03807f157b2065b6453318e1ad5fb017227022277d type=CONTAINER_CREATED_EVENT May 15 01:11:38.009524 containerd[1482]: time="2025-05-15T01:11:38.009397750Z" level=warning msg="container event discarded" container=fbac1840d41a05b579ee6a03807f157b2065b6453318e1ad5fb017227022277d type=CONTAINER_STARTED_EVENT May 15 01:11:48.092334 containerd[1482]: time="2025-05-15T01:11:48.092084676Z" level=warning msg="container event discarded" container=ff10118cd200e273865facf0d3fe1306d5cc46466cff8134bdaac7584e6b4983 type=CONTAINER_CREATED_EVENT May 15 01:11:48.092334 containerd[1482]: time="2025-05-15T01:11:48.092224592Z" level=warning msg="container event discarded" container=ff10118cd200e273865facf0d3fe1306d5cc46466cff8134bdaac7584e6b4983 type=CONTAINER_STARTED_EVENT May 15 01:11:48.135810 containerd[1482]: time="2025-05-15T01:11:48.135687820Z" level=warning msg="container event discarded" container=866f6d8f56f2669314af008af413fbbe123bcb14ee85bdb62520b37c31509a6f type=CONTAINER_CREATED_EVENT May 15 01:11:48.148329 containerd[1482]: time="2025-05-15T01:11:48.148115301Z" level=warning msg="container event discarded" container=a21300f8fc7d1aa3c55c7fdaec067b1c8902ac411233f151032601af91382ae9 type=CONTAINER_CREATED_EVENT May 15 01:11:48.148329 containerd[1482]: time="2025-05-15T01:11:48.148186626Z" level=warning msg="container event discarded" container=a21300f8fc7d1aa3c55c7fdaec067b1c8902ac411233f151032601af91382ae9 type=CONTAINER_STARTED_EVENT May 15 01:11:48.182708 containerd[1482]: time="2025-05-15T01:11:48.182587152Z" level=warning msg="container event discarded" container=c5fca3c62b640f706a7a65da0a1a5abbef669a93b292a4cbfa8b509b79f9b07c type=CONTAINER_CREATED_EVENT May 15 01:11:48.214093 containerd[1482]: time="2025-05-15T01:11:48.213993715Z" level=warning msg="container event discarded" container=866f6d8f56f2669314af008af413fbbe123bcb14ee85bdb62520b37c31509a6f type=CONTAINER_STARTED_EVENT May 15 
01:11:48.271517 containerd[1482]: time="2025-05-15T01:11:48.271382772Z" level=warning msg="container event discarded" container=c5fca3c62b640f706a7a65da0a1a5abbef669a93b292a4cbfa8b509b79f9b07c type=CONTAINER_STARTED_EVENT May 15 01:12:24.616365 update_engine[1463]: I20250515 01:12:24.615226 1463 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs May 15 01:12:24.616365 update_engine[1463]: I20250515 01:12:24.615630 1463 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs May 15 01:12:24.619200 update_engine[1463]: I20250515 01:12:24.617785 1463 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs May 15 01:12:24.619717 update_engine[1463]: I20250515 01:12:24.619212 1463 omaha_request_params.cc:62] Current group set to alpha May 15 01:12:24.621994 update_engine[1463]: I20250515 01:12:24.621877 1463 update_attempter.cc:499] Already updated boot flags. Skipping. May 15 01:12:24.621994 update_engine[1463]: I20250515 01:12:24.621925 1463 update_attempter.cc:643] Scheduling an action processor start. May 15 01:12:24.622260 update_engine[1463]: I20250515 01:12:24.621989 1463 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction May 15 01:12:24.622260 update_engine[1463]: I20250515 01:12:24.622200 1463 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs May 15 01:12:24.623103 update_engine[1463]: I20250515 01:12:24.623030 1463 omaha_request_action.cc:271] Posting an Omaha request to disabled May 15 01:12:24.623103 update_engine[1463]: I20250515 01:12:24.623078 1463 omaha_request_action.cc:272] Request: May 15 01:12:24.623103 update_engine[1463]: May 15 01:12:24.623103 update_engine[1463]: May 15 01:12:24.623103 update_engine[1463]: May 15 01:12:24.623103 update_engine[1463]: May 15 01:12:24.623103 update_engine[1463]: May 15 01:12:24.623103 update_engine[1463]: May 15 01:12:24.623103 update_engine[1463]: May 15 01:12:24.623103 update_engine[1463]: May 15 01:12:24.623103 update_engine[1463]: I20250515 01:12:24.623106 1463 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 15 01:12:24.630053 locksmithd[1476]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 May 15 01:12:24.631148 update_engine[1463]: I20250515 01:12:24.631065 1463 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 15 01:12:24.632453 update_engine[1463]: I20250515 01:12:24.632251 1463 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. May 15 01:12:24.639716 update_engine[1463]: E20250515 01:12:24.639608 1463 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 15 01:12:24.639890 update_engine[1463]: I20250515 01:12:24.639847 1463 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 May 15 01:12:34.551084 update_engine[1463]: I20250515 01:12:34.550860 1463 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 15 01:12:34.552186 update_engine[1463]: I20250515 01:12:34.551474 1463 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 15 01:12:34.552186 update_engine[1463]: I20250515 01:12:34.551978 1463 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
May 15 01:12:34.557220 update_engine[1463]: E20250515 01:12:34.557137 1463 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 15 01:12:34.557450 update_engine[1463]: I20250515 01:12:34.557369 1463 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 May 15 01:12:44.553068 update_engine[1463]: I20250515 01:12:44.551780 1463 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 15 01:12:44.555448 update_engine[1463]: I20250515 01:12:44.554981 1463 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 15 01:12:44.556165 update_engine[1463]: I20250515 01:12:44.555845 1463 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. May 15 01:12:44.561518 update_engine[1463]: E20250515 01:12:44.561414 1463 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 15 01:12:44.561685 update_engine[1463]: I20250515 01:12:44.561595 1463 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 May 15 01:12:54.549749 update_engine[1463]: I20250515 01:12:54.549232 1463 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 15 01:12:54.554326 update_engine[1463]: I20250515 01:12:54.552860 1463 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 15 01:12:54.554326 update_engine[1463]: I20250515 01:12:54.554046 1463 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. May 15 01:12:54.560446 update_engine[1463]: E20250515 01:12:54.559429 1463 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 15 01:12:54.560446 update_engine[1463]: I20250515 01:12:54.559614 1463 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded May 15 01:12:54.560446 update_engine[1463]: I20250515 01:12:54.559669 1463 omaha_request_action.cc:617] Omaha request response: May 15 01:12:54.560446 update_engine[1463]: E20250515 01:12:54.560071 1463 omaha_request_action.cc:636] Omaha request network transfer failed. May 15 01:12:54.561082 update_engine[1463]: I20250515 01:12:54.560678 1463 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. May 15 01:12:54.561082 update_engine[1463]: I20250515 01:12:54.560704 1463 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction May 15 01:12:54.561082 update_engine[1463]: I20250515 01:12:54.560724 1463 update_attempter.cc:306] Processing Done. May 15 01:12:54.561082 update_engine[1463]: E20250515 01:12:54.560842 1463 update_attempter.cc:619] Update failed. May 15 01:12:54.561082 update_engine[1463]: I20250515 01:12:54.560878 1463 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse May 15 01:12:54.561082 update_engine[1463]: I20250515 01:12:54.560891 1463 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) May 15 01:12:54.561082 update_engine[1463]: I20250515 01:12:54.560909 1463 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. 
May 15 01:12:54.561754 update_engine[1463]: I20250515 01:12:54.561542 1463 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction May 15 01:12:54.561754 update_engine[1463]: I20250515 01:12:54.561714 1463 omaha_request_action.cc:271] Posting an Omaha request to disabled May 15 01:12:54.561754 update_engine[1463]: I20250515 01:12:54.561738 1463 omaha_request_action.cc:272] Request: May 15 01:12:54.561754 update_engine[1463]: May 15 01:12:54.561754 update_engine[1463]: May 15 01:12:54.561754 update_engine[1463]: May 15 01:12:54.561754 update_engine[1463]: May 15 01:12:54.561754 update_engine[1463]: May 15 01:12:54.561754 update_engine[1463]: May 15 01:12:54.561754 update_engine[1463]: I20250515 01:12:54.561753 1463 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 15 01:12:54.562549 update_engine[1463]: I20250515 01:12:54.562047 1463 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 15 01:12:54.562549 update_engine[1463]: I20250515 01:12:54.562476 1463 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. May 15 01:12:54.567911 locksmithd[1476]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 May 15 01:12:54.569372 locksmithd[1476]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 May 15 01:12:54.569585 update_engine[1463]: E20250515 01:12:54.567924 1463 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 15 01:12:54.569585 update_engine[1463]: I20250515 01:12:54.568064 1463 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded May 15 01:12:54.569585 update_engine[1463]: I20250515 01:12:54.568094 1463 omaha_request_action.cc:617] Omaha request response: May 15 01:12:54.569585 update_engine[1463]: I20250515 01:12:54.568109 1463 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction May 15 01:12:54.569585 update_engine[1463]: I20250515 01:12:54.568120 1463 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction May 15 01:12:54.569585 update_engine[1463]: I20250515 01:12:54.568132 1463 update_attempter.cc:306] Processing Done. May 15 01:12:54.569585 update_engine[1463]: I20250515 01:12:54.568145 1463 update_attempter.cc:310] Error event sent. May 15 01:12:54.569585 update_engine[1463]: I20250515 01:12:54.568183 1463 update_check_scheduler.cc:74] Next update check in 42m57s
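The update_engine trace above shows its polling loop failing by design: the Omaha endpoint is configured as the literal string "disabled", so every libcurl attempt ends with "Could not resolve host: disabled", three retries are burned, the failure is converted to kActionCodeOmahaErrorInHTTPResponse, and the next check is scheduled 42m57s out. update_engine itself is C++ (libcurl_http_fetcher.cc, omaha_request_action.cc); purely to illustrate the bounded-retry-with-per-attempt-timeout pattern visible in the log, and not as a reimplementation, a Go sketch could look like the following, with the URL, attempt limit, and delays as placeholders.

    package main

    import (
        "context"
        "fmt"
        "net/http"
        "time"
    )

    // checkOnce performs a single request with a per-attempt timeout, mirroring the
    // "Setting up timeout source" / "No HTTP response, retry N" sequence above.
    func checkOnce(ctx context.Context, url string) (int, error) {
        ctx, cancel := context.WithTimeout(ctx, 10*time.Second)
        defer cancel()

        req, err := http.NewRequestWithContext(ctx, http.MethodGet, url, nil)
        if err != nil {
            return 0, err
        }
        resp, err := http.DefaultClient.Do(req)
        if err != nil {
            return 0, err
        }
        defer resp.Body.Close()
        return resp.StatusCode, nil
    }

    func main() {
        const url = "https://updates.example.invalid/omaha" // placeholder endpoint
        const maxAttempts = 3

        for attempt := 1; attempt <= maxAttempts; attempt++ {
            code, err := checkOnce(context.Background(), url)
            if err == nil {
                fmt.Printf("got HTTP %d on attempt %d\n", code, attempt)
                return
            }
            fmt.Printf("no HTTP response, retry %d: %v\n", attempt, err)
            time.Sleep(time.Duration(attempt*10) * time.Second) // simple backoff between attempts
        }
        fmt.Println("transfer failed; giving up until the next scheduled check")
    }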