May 10 10:44:00.084788 kernel: Linux version 6.6.89-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Sat May 10 08:33:52 -00 2025 May 10 10:44:00.084825 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=cdff7a1e66558670c3a31fd90f395811dccc4cb131ce51930f033b8634f7f080 May 10 10:44:00.084840 kernel: BIOS-provided physical RAM map: May 10 10:44:00.084849 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable May 10 10:44:00.084857 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved May 10 10:44:00.084866 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved May 10 10:44:00.084876 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdcfff] usable May 10 10:44:00.084884 kernel: BIOS-e820: [mem 0x00000000bffdd000-0x00000000bfffffff] reserved May 10 10:44:00.084893 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved May 10 10:44:00.084901 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved May 10 10:44:00.084910 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000013fffffff] usable May 10 10:44:00.084921 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved May 10 10:44:00.084930 kernel: NX (Execute Disable) protection: active May 10 10:44:00.084939 kernel: APIC: Static calls initialized May 10 10:44:00.084949 kernel: SMBIOS 3.0.0 present. May 10 10:44:00.084958 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.16.3-debian-1.16.3-2 04/01/2014 May 10 10:44:00.084970 kernel: Hypervisor detected: KVM May 10 10:44:00.084979 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 May 10 10:44:00.084987 kernel: kvm-clock: using sched offset of 4693863627 cycles May 10 10:44:00.084997 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns May 10 10:44:00.085006 kernel: tsc: Detected 1996.249 MHz processor May 10 10:44:00.085016 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved May 10 10:44:00.085026 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable May 10 10:44:00.085036 kernel: last_pfn = 0x140000 max_arch_pfn = 0x400000000 May 10 10:44:00.085045 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs May 10 10:44:00.085055 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT May 10 10:44:00.085067 kernel: last_pfn = 0xbffdd max_arch_pfn = 0x400000000 May 10 10:44:00.085076 kernel: ACPI: Early table checksum verification disabled May 10 10:44:00.085086 kernel: ACPI: RSDP 0x00000000000F51E0 000014 (v00 BOCHS ) May 10 10:44:00.085095 kernel: ACPI: RSDT 0x00000000BFFE1B65 000030 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 10 10:44:00.085105 kernel: ACPI: FACP 0x00000000BFFE1A49 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 10 10:44:00.085114 kernel: ACPI: DSDT 0x00000000BFFE0040 001A09 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 10 10:44:00.085124 kernel: ACPI: FACS 0x00000000BFFE0000 000040 May 10 10:44:00.085133 kernel: ACPI: APIC 0x00000000BFFE1ABD 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) May 10 10:44:00.085142 kernel: ACPI: WAET 0x00000000BFFE1B3D 000028 (v01 
BOCHS BXPC 00000001 BXPC 00000001) May 10 10:44:00.085154 kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1a49-0xbffe1abc] May 10 10:44:00.085163 kernel: ACPI: Reserving DSDT table memory at [mem 0xbffe0040-0xbffe1a48] May 10 10:44:00.085173 kernel: ACPI: Reserving FACS table memory at [mem 0xbffe0000-0xbffe003f] May 10 10:44:00.085182 kernel: ACPI: Reserving APIC table memory at [mem 0xbffe1abd-0xbffe1b3c] May 10 10:44:00.085192 kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1b3d-0xbffe1b64] May 10 10:44:00.085205 kernel: No NUMA configuration found May 10 10:44:00.085217 kernel: Faking a node at [mem 0x0000000000000000-0x000000013fffffff] May 10 10:44:00.085227 kernel: NODE_DATA(0) allocated [mem 0x13fff5000-0x13fffcfff] May 10 10:44:00.085237 kernel: Zone ranges: May 10 10:44:00.085247 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] May 10 10:44:00.085256 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] May 10 10:44:00.085266 kernel: Normal [mem 0x0000000100000000-0x000000013fffffff] May 10 10:44:00.085276 kernel: Device empty May 10 10:44:00.085285 kernel: Movable zone start for each node May 10 10:44:00.085297 kernel: Early memory node ranges May 10 10:44:00.085308 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] May 10 10:44:00.085317 kernel: node 0: [mem 0x0000000000100000-0x00000000bffdcfff] May 10 10:44:00.085327 kernel: node 0: [mem 0x0000000100000000-0x000000013fffffff] May 10 10:44:00.085337 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000013fffffff] May 10 10:44:00.085346 kernel: On node 0, zone DMA: 1 pages in unavailable ranges May 10 10:44:00.085356 kernel: On node 0, zone DMA: 97 pages in unavailable ranges May 10 10:44:00.085366 kernel: On node 0, zone Normal: 35 pages in unavailable ranges May 10 10:44:00.085376 kernel: ACPI: PM-Timer IO Port: 0x608 May 10 10:44:00.085386 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) May 10 10:44:00.085398 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 May 10 10:44:00.085408 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) May 10 10:44:00.085417 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) May 10 10:44:00.085427 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) May 10 10:44:00.085437 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) May 10 10:44:00.085447 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) May 10 10:44:00.085456 kernel: ACPI: Using ACPI (MADT) for SMP configuration information May 10 10:44:00.085466 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs May 10 10:44:00.085476 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() May 10 10:44:00.085489 kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices May 10 10:44:00.085499 kernel: Booting paravirtualized kernel on KVM May 10 10:44:00.085509 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns May 10 10:44:00.085519 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 May 10 10:44:00.085529 kernel: percpu: Embedded 58 pages/cpu s197416 r8192 d31960 u1048576 May 10 10:44:00.085538 kernel: pcpu-alloc: s197416 r8192 d31960 u1048576 alloc=1*2097152 May 10 10:44:00.085548 kernel: pcpu-alloc: [0] 0 1 May 10 10:44:00.085557 kernel: kvm-guest: PV spinlocks disabled, no host support May 10 10:44:00.085569 kernel: Kernel command line: rootflags=rw mount.usrflags=ro 
BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=cdff7a1e66558670c3a31fd90f395811dccc4cb131ce51930f033b8634f7f080 May 10 10:44:00.085581 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. May 10 10:44:00.085604 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) May 10 10:44:00.085614 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 10 10:44:00.085624 kernel: Fallback order for Node 0: 0 May 10 10:44:00.085634 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1031901 May 10 10:44:00.085644 kernel: Policy zone: Normal May 10 10:44:00.085654 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off May 10 10:44:00.085668 kernel: software IO TLB: area num 2. May 10 10:44:00.085679 kernel: Memory: 3968244K/4193772K available (14336K kernel code, 2309K rwdata, 9044K rodata, 53680K init, 1596K bss, 225268K reserved, 0K cma-reserved) May 10 10:44:00.085720 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 May 10 10:44:00.085732 kernel: ftrace: allocating 38190 entries in 150 pages May 10 10:44:00.085743 kernel: ftrace: allocated 150 pages with 4 groups May 10 10:44:00.085753 kernel: Dynamic Preempt: voluntary May 10 10:44:00.085764 kernel: rcu: Preemptible hierarchical RCU implementation. May 10 10:44:00.085775 kernel: rcu: RCU event tracing is enabled. May 10 10:44:00.085786 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. May 10 10:44:00.085796 kernel: Trampoline variant of Tasks RCU enabled. May 10 10:44:00.085810 kernel: Rude variant of Tasks RCU enabled. May 10 10:44:00.085820 kernel: Tracing variant of Tasks RCU enabled. May 10 10:44:00.085831 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. May 10 10:44:00.085841 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 May 10 10:44:00.085852 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 May 10 10:44:00.085863 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. May 10 10:44:00.085873 kernel: Console: colour VGA+ 80x25 May 10 10:44:00.085884 kernel: printk: console [tty0] enabled May 10 10:44:00.085894 kernel: printk: console [ttyS0] enabled May 10 10:44:00.085907 kernel: ACPI: Core revision 20230628 May 10 10:44:00.085918 kernel: APIC: Switch to symmetric I/O mode setup May 10 10:44:00.085928 kernel: x2apic enabled May 10 10:44:00.085939 kernel: APIC: Switched APIC routing to: physical x2apic May 10 10:44:00.085949 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 May 10 10:44:00.085960 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized May 10 10:44:00.085971 kernel: Calibrating delay loop (skipped) preset value.. 
3992.49 BogoMIPS (lpj=1996249) May 10 10:44:00.085982 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 May 10 10:44:00.085992 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 May 10 10:44:00.086005 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization May 10 10:44:00.086016 kernel: Spectre V2 : Mitigation: Retpolines May 10 10:44:00.086026 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT May 10 10:44:00.086037 kernel: Speculative Store Bypass: Vulnerable May 10 10:44:00.086047 kernel: x86/fpu: x87 FPU will use FXSAVE May 10 10:44:00.086058 kernel: Freeing SMP alternatives memory: 32K May 10 10:44:00.086075 kernel: pid_max: default: 32768 minimum: 301 May 10 10:44:00.086088 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity May 10 10:44:00.086099 kernel: landlock: Up and running. May 10 10:44:00.086111 kernel: SELinux: Initializing. May 10 10:44:00.086122 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 10 10:44:00.086133 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 10 10:44:00.086146 kernel: smpboot: CPU0: AMD Intel Core i7 9xx (Nehalem Class Core i7) (family: 0x6, model: 0x1a, stepping: 0x3) May 10 10:44:00.086158 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. May 10 10:44:00.086169 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. May 10 10:44:00.086181 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. May 10 10:44:00.086193 kernel: Performance Events: AMD PMU driver. May 10 10:44:00.086205 kernel: ... version: 0 May 10 10:44:00.086216 kernel: ... bit width: 48 May 10 10:44:00.086227 kernel: ... generic registers: 4 May 10 10:44:00.086237 kernel: ... value mask: 0000ffffffffffff May 10 10:44:00.086248 kernel: ... max period: 00007fffffffffff May 10 10:44:00.086259 kernel: ... fixed-purpose events: 0 May 10 10:44:00.086270 kernel: ... event mask: 000000000000000f May 10 10:44:00.086281 kernel: signal: max sigframe size: 1440 May 10 10:44:00.086294 kernel: rcu: Hierarchical SRCU implementation. May 10 10:44:00.086305 kernel: rcu: Max phase no-delay instances is 400. May 10 10:44:00.086316 kernel: smp: Bringing up secondary CPUs ... May 10 10:44:00.086327 kernel: smpboot: x86: Booting SMP configuration: May 10 10:44:00.086338 kernel: .... 
node #0, CPUs: #1 May 10 10:44:00.086349 kernel: smp: Brought up 1 node, 2 CPUs May 10 10:44:00.086359 kernel: smpboot: Max logical packages: 2 May 10 10:44:00.086371 kernel: smpboot: Total of 2 processors activated (7984.99 BogoMIPS) May 10 10:44:00.086382 kernel: devtmpfs: initialized May 10 10:44:00.086392 kernel: x86/mm: Memory block size: 128MB May 10 10:44:00.086405 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns May 10 10:44:00.086417 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) May 10 10:44:00.086428 kernel: pinctrl core: initialized pinctrl subsystem May 10 10:44:00.086439 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family May 10 10:44:00.086450 kernel: audit: initializing netlink subsys (disabled) May 10 10:44:00.086461 kernel: audit: type=2000 audit(1746873836.066:1): state=initialized audit_enabled=0 res=1 May 10 10:44:00.086472 kernel: thermal_sys: Registered thermal governor 'step_wise' May 10 10:44:00.086483 kernel: thermal_sys: Registered thermal governor 'user_space' May 10 10:44:00.086494 kernel: cpuidle: using governor menu May 10 10:44:00.086508 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 May 10 10:44:00.086519 kernel: dca service started, version 1.12.1 May 10 10:44:00.086530 kernel: PCI: Using configuration type 1 for base access May 10 10:44:00.086541 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. May 10 10:44:00.086553 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages May 10 10:44:00.086563 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page May 10 10:44:00.086574 kernel: ACPI: Added _OSI(Module Device) May 10 10:44:00.086585 kernel: ACPI: Added _OSI(Processor Device) May 10 10:44:00.086596 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) May 10 10:44:00.086610 kernel: ACPI: Added _OSI(Processor Aggregator Device) May 10 10:44:00.086621 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded May 10 10:44:00.086632 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC May 10 10:44:00.086643 kernel: ACPI: Interpreter enabled May 10 10:44:00.086654 kernel: ACPI: PM: (supports S0 S3 S5) May 10 10:44:00.086665 kernel: ACPI: Using IOAPIC for interrupt routing May 10 10:44:00.086677 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug May 10 10:44:00.086700 kernel: PCI: Using E820 reservations for host bridge windows May 10 10:44:00.086712 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F May 10 10:44:00.086727 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) May 10 10:44:00.086901 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] May 10 10:44:00.087015 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] May 10 10:44:00.087121 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge May 10 10:44:00.087138 kernel: acpiphp: Slot [3] registered May 10 10:44:00.087149 kernel: acpiphp: Slot [4] registered May 10 10:44:00.087160 kernel: acpiphp: Slot [5] registered May 10 10:44:00.087178 kernel: acpiphp: Slot [6] registered May 10 10:44:00.087191 kernel: acpiphp: Slot [7] registered May 10 10:44:00.087203 kernel: acpiphp: Slot [8] registered May 10 10:44:00.087214 kernel: acpiphp: Slot [9] registered May 10 10:44:00.087225 kernel: acpiphp: Slot [10] registered May 10 10:44:00.087237 
kernel: acpiphp: Slot [11] registered May 10 10:44:00.087248 kernel: acpiphp: Slot [12] registered May 10 10:44:00.087259 kernel: acpiphp: Slot [13] registered May 10 10:44:00.087270 kernel: acpiphp: Slot [14] registered May 10 10:44:00.087285 kernel: acpiphp: Slot [15] registered May 10 10:44:00.087296 kernel: acpiphp: Slot [16] registered May 10 10:44:00.087307 kernel: acpiphp: Slot [17] registered May 10 10:44:00.087318 kernel: acpiphp: Slot [18] registered May 10 10:44:00.087331 kernel: acpiphp: Slot [19] registered May 10 10:44:00.087342 kernel: acpiphp: Slot [20] registered May 10 10:44:00.087353 kernel: acpiphp: Slot [21] registered May 10 10:44:00.087363 kernel: acpiphp: Slot [22] registered May 10 10:44:00.087373 kernel: acpiphp: Slot [23] registered May 10 10:44:00.087383 kernel: acpiphp: Slot [24] registered May 10 10:44:00.087396 kernel: acpiphp: Slot [25] registered May 10 10:44:00.087407 kernel: acpiphp: Slot [26] registered May 10 10:44:00.087417 kernel: acpiphp: Slot [27] registered May 10 10:44:00.087428 kernel: acpiphp: Slot [28] registered May 10 10:44:00.087438 kernel: acpiphp: Slot [29] registered May 10 10:44:00.087448 kernel: acpiphp: Slot [30] registered May 10 10:44:00.087458 kernel: acpiphp: Slot [31] registered May 10 10:44:00.087469 kernel: PCI host bridge to bus 0000:00 May 10 10:44:00.087580 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] May 10 10:44:00.087680 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] May 10 10:44:00.089825 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] May 10 10:44:00.089923 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] May 10 10:44:00.090018 kernel: pci_bus 0000:00: root bus resource [mem 0xc000000000-0xc07fffffff window] May 10 10:44:00.090116 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] May 10 10:44:00.090249 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 May 10 10:44:00.090389 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 May 10 10:44:00.090509 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 May 10 10:44:00.090619 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc120-0xc12f] May 10 10:44:00.090753 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] May 10 10:44:00.090861 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6] May 10 10:44:00.090968 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] May 10 10:44:00.091076 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376] May 10 10:44:00.091200 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 May 10 10:44:00.091320 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI May 10 10:44:00.091422 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB May 10 10:44:00.091533 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 May 10 10:44:00.091635 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref] May 10 10:44:00.091757 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xc000000000-0xc000003fff 64bit pref] May 10 10:44:00.091869 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfeb90000-0xfeb90fff] May 10 10:44:00.091970 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfeb80000-0xfeb8ffff pref] May 10 10:44:00.092071 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] May 10 10:44:00.092182 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 May 10 10:44:00.092284 kernel: pci 
0000:00:03.0: reg 0x10: [io 0xc080-0xc0bf] May 10 10:44:00.092384 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfeb91000-0xfeb91fff] May 10 10:44:00.092483 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xc000004000-0xc000007fff 64bit pref] May 10 10:44:00.092589 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb00000-0xfeb7ffff pref] May 10 10:44:00.092717 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 May 10 10:44:00.092826 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f] May 10 10:44:00.092926 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfeb92000-0xfeb92fff] May 10 10:44:00.093026 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xc000008000-0xc00000bfff 64bit pref] May 10 10:44:00.093139 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 May 10 10:44:00.093240 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc0c0-0xc0ff] May 10 10:44:00.093347 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xc00000c000-0xc00000ffff 64bit pref] May 10 10:44:00.093455 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 May 10 10:44:00.093556 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc100-0xc11f] May 10 10:44:00.093821 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfeb93000-0xfeb93fff] May 10 10:44:00.093933 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xc000010000-0xc000013fff 64bit pref] May 10 10:44:00.093949 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 May 10 10:44:00.093961 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 May 10 10:44:00.093973 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 May 10 10:44:00.093988 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 May 10 10:44:00.093999 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 May 10 10:44:00.094011 kernel: iommu: Default domain type: Translated May 10 10:44:00.094022 kernel: iommu: DMA domain TLB invalidation policy: lazy mode May 10 10:44:00.094033 kernel: PCI: Using ACPI for IRQ routing May 10 10:44:00.094044 kernel: PCI: pci_cache_line_size set to 64 bytes May 10 10:44:00.094055 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] May 10 10:44:00.094067 kernel: e820: reserve RAM buffer [mem 0xbffdd000-0xbfffffff] May 10 10:44:00.094173 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device May 10 10:44:00.094287 kernel: pci 0000:00:02.0: vgaarb: bridge control possible May 10 10:44:00.094394 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none May 10 10:44:00.094410 kernel: vgaarb: loaded May 10 10:44:00.094421 kernel: clocksource: Switched to clocksource kvm-clock May 10 10:44:00.094432 kernel: VFS: Disk quotas dquot_6.6.0 May 10 10:44:00.094443 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) May 10 10:44:00.094454 kernel: pnp: PnP ACPI init May 10 10:44:00.094567 kernel: pnp 00:03: [dma 2] May 10 10:44:00.094589 kernel: pnp: PnP ACPI: found 5 devices May 10 10:44:00.094600 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns May 10 10:44:00.094612 kernel: NET: Registered PF_INET protocol family May 10 10:44:00.094623 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) May 10 10:44:00.094634 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) May 10 10:44:00.094645 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) May 10 10:44:00.094656 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) May 10 10:44:00.094668 kernel: TCP bind hash table entries: 
32768 (order: 8, 1048576 bytes, linear) May 10 10:44:00.094679 kernel: TCP: Hash tables configured (established 32768 bind 32768) May 10 10:44:00.094795 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) May 10 10:44:00.094809 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) May 10 10:44:00.094821 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family May 10 10:44:00.094832 kernel: NET: Registered PF_XDP protocol family May 10 10:44:00.094931 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] May 10 10:44:00.095022 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] May 10 10:44:00.095116 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] May 10 10:44:00.095206 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window] May 10 10:44:00.095304 kernel: pci_bus 0000:00: resource 8 [mem 0xc000000000-0xc07fffffff window] May 10 10:44:00.095409 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release May 10 10:44:00.095509 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers May 10 10:44:00.095524 kernel: PCI: CLS 0 bytes, default 64 May 10 10:44:00.095534 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) May 10 10:44:00.095545 kernel: software IO TLB: mapped [mem 0x00000000bbfdd000-0x00000000bffdd000] (64MB) May 10 10:44:00.095556 kernel: Initialise system trusted keyrings May 10 10:44:00.095566 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 May 10 10:44:00.095580 kernel: Key type asymmetric registered May 10 10:44:00.095591 kernel: Asymmetric key parser 'x509' registered May 10 10:44:00.095601 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) May 10 10:44:00.095611 kernel: io scheduler mq-deadline registered May 10 10:44:00.095622 kernel: io scheduler kyber registered May 10 10:44:00.095632 kernel: io scheduler bfq registered May 10 10:44:00.095642 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 May 10 10:44:00.095653 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10 May 10 10:44:00.095664 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 May 10 10:44:00.095677 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 May 10 10:44:00.095756 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 May 10 10:44:00.095769 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled May 10 10:44:00.095780 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A May 10 10:44:00.095791 kernel: random: crng init done May 10 10:44:00.095801 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 May 10 10:44:00.095811 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 May 10 10:44:00.095822 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 May 10 10:44:00.095922 kernel: rtc_cmos 00:04: RTC can wake from S4 May 10 10:44:00.095943 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 May 10 10:44:00.096032 kernel: rtc_cmos 00:04: registered as rtc0 May 10 10:44:00.096120 kernel: rtc_cmos 00:04: setting system clock to 2025-05-10T10:43:59 UTC (1746873839) May 10 10:44:00.096208 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram May 10 10:44:00.096223 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled May 10 10:44:00.096233 kernel: NET: Registered PF_INET6 protocol family May 10 10:44:00.096243 kernel: Segment Routing with IPv6 May 10 10:44:00.096253 kernel: In-situ OAM (IOAM) with IPv6 May 10 10:44:00.096268 kernel: NET: Registered PF_PACKET 
protocol family May 10 10:44:00.096279 kernel: Key type dns_resolver registered May 10 10:44:00.096290 kernel: IPI shorthand broadcast: enabled May 10 10:44:00.096300 kernel: sched_clock: Marking stable (3585007960, 182875546)->(3799915210, -32031704) May 10 10:44:00.096311 kernel: registered taskstats version 1 May 10 10:44:00.096321 kernel: Loading compiled-in X.509 certificates May 10 10:44:00.096332 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.89-flatcar: f8080549509982706805ea0b811f8f4bcb4a274e' May 10 10:44:00.096342 kernel: Key type .fscrypt registered May 10 10:44:00.096352 kernel: Key type fscrypt-provisioning registered May 10 10:44:00.096365 kernel: ima: No TPM chip found, activating TPM-bypass! May 10 10:44:00.096375 kernel: ima: Allocated hash algorithm: sha1 May 10 10:44:00.096386 kernel: ima: No architecture policies found May 10 10:44:00.096396 kernel: clk: Disabling unused clocks May 10 10:44:00.096406 kernel: Warning: unable to open an initial console. May 10 10:44:00.096417 kernel: Freeing unused kernel image (initmem) memory: 53680K May 10 10:44:00.096427 kernel: Write protecting the kernel read-only data: 24576k May 10 10:44:00.096437 kernel: Freeing unused kernel image (rodata/data gap) memory: 1196K May 10 10:44:00.096447 kernel: Run /init as init process May 10 10:44:00.096462 kernel: with arguments: May 10 10:44:00.096472 kernel: /init May 10 10:44:00.096482 kernel: with environment: May 10 10:44:00.096492 kernel: HOME=/ May 10 10:44:00.096502 kernel: TERM=linux May 10 10:44:00.096512 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 10 10:44:00.096524 systemd[1]: Successfully made /usr/ read-only. May 10 10:44:00.096539 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 10 10:44:00.096555 systemd[1]: Detected virtualization kvm. May 10 10:44:00.096566 systemd[1]: Detected architecture x86-64. May 10 10:44:00.096577 systemd[1]: Running in initrd. May 10 10:44:00.096587 systemd[1]: No hostname configured, using default hostname. May 10 10:44:00.096598 systemd[1]: Hostname set to . May 10 10:44:00.096609 systemd[1]: Initializing machine ID from VM UUID. May 10 10:44:00.096620 systemd[1]: Queued start job for default target initrd.target. May 10 10:44:00.096633 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 10 10:44:00.096655 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 10 10:44:00.096669 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... May 10 10:44:00.096681 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 10 10:44:00.096707 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... May 10 10:44:00.096723 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... May 10 10:44:00.096735 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... May 10 10:44:00.096747 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... 
May 10 10:44:00.096758 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 10 10:44:00.096769 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 10 10:44:00.096780 systemd[1]: Reached target paths.target - Path Units. May 10 10:44:00.096791 systemd[1]: Reached target slices.target - Slice Units. May 10 10:44:00.096803 systemd[1]: Reached target swap.target - Swaps. May 10 10:44:00.096816 systemd[1]: Reached target timers.target - Timer Units. May 10 10:44:00.096827 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. May 10 10:44:00.096839 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 10 10:44:00.096850 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). May 10 10:44:00.096861 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. May 10 10:44:00.096872 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 10 10:44:00.096884 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 10 10:44:00.096895 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 10 10:44:00.096906 systemd[1]: Reached target sockets.target - Socket Units. May 10 10:44:00.096920 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... May 10 10:44:00.096931 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 10 10:44:00.096942 systemd[1]: Finished network-cleanup.service - Network Cleanup. May 10 10:44:00.096954 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). May 10 10:44:00.096965 systemd[1]: Starting systemd-fsck-usr.service... May 10 10:44:00.096977 systemd[1]: Starting systemd-journald.service - Journal Service... May 10 10:44:00.096988 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 10 10:44:00.096999 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 10 10:44:00.097014 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. May 10 10:44:00.097026 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 10 10:44:00.097059 systemd-journald[187]: Collecting audit messages is disabled. May 10 10:44:00.097092 systemd[1]: Finished systemd-fsck-usr.service. May 10 10:44:00.097104 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 10 10:44:00.097119 systemd-journald[187]: Journal started May 10 10:44:00.097144 systemd-journald[187]: Runtime Journal (/run/log/journal/732b66c745b84afa8332c1bf3956111b) is 8M, max 78.5M, 70.5M free. May 10 10:44:00.101372 systemd[1]: Started systemd-journald.service - Journal Service. May 10 10:44:00.103336 systemd-modules-load[188]: Inserted module 'overlay' May 10 10:44:00.119950 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 10 10:44:00.161457 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 10 10:44:00.161491 kernel: Bridge firewalling registered May 10 10:44:00.136518 systemd-modules-load[188]: Inserted module 'br_netfilter' May 10 10:44:00.165141 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. 
May 10 10:44:00.166996 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 10 10:44:00.168499 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 10 10:44:00.169532 systemd-tmpfiles[201]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. May 10 10:44:00.173175 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 10 10:44:00.177312 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 10 10:44:00.181799 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 10 10:44:00.187220 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 10 10:44:00.197748 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 10 10:44:00.198942 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 10 10:44:00.207100 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 10 10:44:00.207791 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 10 10:44:00.209817 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... May 10 10:44:00.231709 dracut-cmdline[227]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=cdff7a1e66558670c3a31fd90f395811dccc4cb131ce51930f033b8634f7f080 May 10 10:44:00.258168 systemd-resolved[226]: Positive Trust Anchors: May 10 10:44:00.258183 systemd-resolved[226]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 10 10:44:00.258229 systemd-resolved[226]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 10 10:44:00.266556 systemd-resolved[226]: Defaulting to hostname 'linux'. May 10 10:44:00.267574 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 10 10:44:00.268418 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 10 10:44:00.308761 kernel: SCSI subsystem initialized May 10 10:44:00.319729 kernel: Loading iSCSI transport class v2.0-870. May 10 10:44:00.332148 kernel: iscsi: registered transport (tcp) May 10 10:44:00.354791 kernel: iscsi: registered transport (qla4xxx) May 10 10:44:00.354897 kernel: QLogic iSCSI HBA Driver May 10 10:44:00.376569 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 10 10:44:00.400671 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. 
May 10 10:44:00.403056 systemd[1]: Reached target network-pre.target - Preparation for Network. May 10 10:44:00.455059 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. May 10 10:44:00.457828 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... May 10 10:44:00.508766 kernel: raid6: sse2x4 gen() 12703 MB/s May 10 10:44:00.526750 kernel: raid6: sse2x2 gen() 11985 MB/s May 10 10:44:00.545219 kernel: raid6: sse2x1 gen() 9392 MB/s May 10 10:44:00.545300 kernel: raid6: using algorithm sse2x4 gen() 12703 MB/s May 10 10:44:00.564283 kernel: raid6: .... xor() 4336 MB/s, rmw enabled May 10 10:44:00.564339 kernel: raid6: using ssse3x2 recovery algorithm May 10 10:44:00.617768 kernel: xor: measuring software checksum speed May 10 10:44:00.617849 kernel: prefetch64-sse : 7419 MB/sec May 10 10:44:00.617871 kernel: generic_sse : 6752 MB/sec May 10 10:44:00.620887 kernel: xor: using function: prefetch64-sse (7419 MB/sec) May 10 10:44:00.805770 kernel: Btrfs loaded, zoned=no, fsverity=no May 10 10:44:00.816664 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. May 10 10:44:00.819385 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 10 10:44:00.871789 systemd-udevd[436]: Using default interface naming scheme 'v255'. May 10 10:44:00.884818 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 10 10:44:00.890128 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... May 10 10:44:00.918111 dracut-pre-trigger[440]: rd.md=0: removing MD RAID activation May 10 10:44:00.943985 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. May 10 10:44:00.946196 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 10 10:44:01.002247 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 10 10:44:01.009981 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... May 10 10:44:01.087709 kernel: virtio_blk virtio2: 2/0/0 default/read/poll queues May 10 10:44:01.104850 kernel: virtio_blk virtio2: [vda] 20971520 512-byte logical blocks (10.7 GB/10.0 GiB) May 10 10:44:01.120567 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. May 10 10:44:01.120632 kernel: GPT:17805311 != 20971519 May 10 10:44:01.120646 kernel: GPT:Alternate GPT header not at the end of the disk. May 10 10:44:01.120659 kernel: GPT:17805311 != 20971519 May 10 10:44:01.120671 kernel: GPT: Use GNU Parted to correct GPT errors. May 10 10:44:01.120684 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 10 10:44:01.122716 kernel: libata version 3.00 loaded. May 10 10:44:01.128734 kernel: ata_piix 0000:00:01.1: version 2.13 May 10 10:44:01.134753 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 May 10 10:44:01.137137 kernel: scsi host0: ata_piix May 10 10:44:01.137299 kernel: scsi host1: ata_piix May 10 10:44:01.141059 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc120 irq 14 May 10 10:44:01.141084 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc128 irq 15 May 10 10:44:01.153824 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 10 10:44:01.153980 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 10 10:44:01.155341 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... 
May 10 10:44:01.157380 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 10 10:44:01.160085 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. May 10 10:44:01.188718 kernel: BTRFS: device fsid 447a9416-2d70-470c-8858-df3b82fa5271 devid 1 transid 39 /dev/vda3 scanned by (udev-worker) (489) May 10 10:44:01.200724 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (492) May 10 10:44:01.220015 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. May 10 10:44:01.247144 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 10 10:44:01.266859 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. May 10 10:44:01.276265 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. May 10 10:44:01.276877 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. May 10 10:44:01.288450 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 10 10:44:01.290917 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... May 10 10:44:01.308226 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. May 10 10:44:01.310001 disk-uuid[531]: Primary Header is updated. May 10 10:44:01.310001 disk-uuid[531]: Secondary Entries is updated. May 10 10:44:01.310001 disk-uuid[531]: Secondary Header is updated. May 10 10:44:01.325457 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 10 10:44:01.311300 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. May 10 10:44:01.311945 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 10 10:44:01.312427 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 10 10:44:01.315916 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... May 10 10:44:01.367810 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. May 10 10:44:02.349795 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 10 10:44:02.352471 disk-uuid[536]: The operation has completed successfully. May 10 10:44:02.428823 systemd[1]: disk-uuid.service: Deactivated successfully. May 10 10:44:02.429074 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. May 10 10:44:02.479145 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... May 10 10:44:02.494488 sh[556]: Success May 10 10:44:02.514947 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. May 10 10:44:02.515011 kernel: device-mapper: uevent: version 1.0.3 May 10 10:44:02.518793 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com May 10 10:44:02.540763 kernel: device-mapper: verity: sha256 using implementation "sha256-ssse3" May 10 10:44:02.632209 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. May 10 10:44:02.636854 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... May 10 10:44:02.644890 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
May 10 10:44:02.683388 kernel: BTRFS info (device dm-0): first mount of filesystem 447a9416-2d70-470c-8858-df3b82fa5271 May 10 10:44:02.683472 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm May 10 10:44:02.686946 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead May 10 10:44:02.692728 kernel: BTRFS info (device dm-0): disabling log replay at mount time May 10 10:44:02.692790 kernel: BTRFS info (device dm-0): using free space tree May 10 10:44:02.710351 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. May 10 10:44:02.713928 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. May 10 10:44:02.716787 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. May 10 10:44:02.719597 systemd[1]: Starting ignition-setup.service - Ignition (setup)... May 10 10:44:02.724913 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... May 10 10:44:02.758153 kernel: BTRFS info (device vda6): first mount of filesystem b607f6a7-c99c-4217-b084-4c38060efb12 May 10 10:44:02.758273 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 10 10:44:02.760076 kernel: BTRFS info (device vda6): using free space tree May 10 10:44:02.768748 kernel: BTRFS info (device vda6): auto enabling async discard May 10 10:44:02.775763 kernel: BTRFS info (device vda6): last unmount of filesystem b607f6a7-c99c-4217-b084-4c38060efb12 May 10 10:44:02.784866 systemd[1]: Finished ignition-setup.service - Ignition (setup). May 10 10:44:02.786806 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... May 10 10:44:02.913836 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 10 10:44:02.917808 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 10 10:44:02.961485 systemd-networkd[739]: lo: Link UP May 10 10:44:02.962208 systemd-networkd[739]: lo: Gained carrier May 10 10:44:02.963369 systemd-networkd[739]: Enumeration completed May 10 10:44:02.963444 systemd[1]: Started systemd-networkd.service - Network Configuration. May 10 10:44:02.964020 systemd[1]: Reached target network.target - Network. May 10 10:44:02.966932 systemd-networkd[739]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 10 10:44:02.966939 systemd-networkd[739]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 10 10:44:02.967839 systemd-networkd[739]: eth0: Link UP May 10 10:44:02.967842 systemd-networkd[739]: eth0: Gained carrier May 10 10:44:02.967851 systemd-networkd[739]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 10 10:44:02.978785 systemd-networkd[739]: eth0: DHCPv4 address 172.24.4.188/24, gateway 172.24.4.1 acquired from 172.24.4.1 May 10 10:44:03.007262 ignition[641]: Ignition 2.21.0 May 10 10:44:03.007994 ignition[641]: Stage: fetch-offline May 10 10:44:03.008027 ignition[641]: no configs at "/usr/lib/ignition/base.d" May 10 10:44:03.009553 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
May 10 10:44:03.008037 ignition[641]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" May 10 10:44:03.008111 ignition[641]: parsed url from cmdline: "" May 10 10:44:03.008116 ignition[641]: no config URL provided May 10 10:44:03.008121 ignition[641]: reading system config file "/usr/lib/ignition/user.ign" May 10 10:44:03.008130 ignition[641]: no config at "/usr/lib/ignition/user.ign" May 10 10:44:03.012822 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... May 10 10:44:03.008135 ignition[641]: failed to fetch config: resource requires networking May 10 10:44:03.008272 ignition[641]: Ignition finished successfully May 10 10:44:03.034197 ignition[748]: Ignition 2.21.0 May 10 10:44:03.034761 ignition[748]: Stage: fetch May 10 10:44:03.034964 ignition[748]: no configs at "/usr/lib/ignition/base.d" May 10 10:44:03.034977 ignition[748]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" May 10 10:44:03.035067 ignition[748]: parsed url from cmdline: "" May 10 10:44:03.035072 ignition[748]: no config URL provided May 10 10:44:03.035078 ignition[748]: reading system config file "/usr/lib/ignition/user.ign" May 10 10:44:03.035086 ignition[748]: no config at "/usr/lib/ignition/user.ign" May 10 10:44:03.035180 ignition[748]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 May 10 10:44:03.036110 ignition[748]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... May 10 10:44:03.036126 ignition[748]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... May 10 10:44:03.366778 ignition[748]: GET result: OK May 10 10:44:03.367108 ignition[748]: parsing config with SHA512: 31484f2a4ae21086dae20ee31fbbc9d1b94c54aed21dfdfef11cd4c038e2ab479d00cb5d557311be243541d413510b5fc389f560620e14b4bcb89ba48b6d4c2d May 10 10:44:03.385270 unknown[748]: fetched base config from "system" May 10 10:44:03.385295 unknown[748]: fetched base config from "system" May 10 10:44:03.386210 ignition[748]: fetch: fetch complete May 10 10:44:03.385308 unknown[748]: fetched user config from "openstack" May 10 10:44:03.386224 ignition[748]: fetch: fetch passed May 10 10:44:03.390962 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). May 10 10:44:03.386314 ignition[748]: Ignition finished successfully May 10 10:44:03.396506 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... May 10 10:44:03.449279 ignition[754]: Ignition 2.21.0 May 10 10:44:03.449316 ignition[754]: Stage: kargs May 10 10:44:03.449680 ignition[754]: no configs at "/usr/lib/ignition/base.d" May 10 10:44:03.449761 ignition[754]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" May 10 10:44:03.454812 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). May 10 10:44:03.451727 ignition[754]: kargs: kargs passed May 10 10:44:03.451828 ignition[754]: Ignition finished successfully May 10 10:44:03.461774 systemd[1]: Starting ignition-disks.service - Ignition (disks)... May 10 10:44:03.504891 ignition[760]: Ignition 2.21.0 May 10 10:44:03.504925 ignition[760]: Stage: disks May 10 10:44:03.505268 ignition[760]: no configs at "/usr/lib/ignition/base.d" May 10 10:44:03.505297 ignition[760]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" May 10 10:44:03.507482 ignition[760]: disks: disks passed May 10 10:44:03.510069 systemd[1]: Finished ignition-disks.service - Ignition (disks). 
May 10 10:44:03.507576 ignition[760]: Ignition finished successfully May 10 10:44:03.513006 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. May 10 10:44:03.514991 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. May 10 10:44:03.517743 systemd[1]: Reached target local-fs.target - Local File Systems. May 10 10:44:03.520102 systemd[1]: Reached target sysinit.target - System Initialization. May 10 10:44:03.523083 systemd[1]: Reached target basic.target - Basic System. May 10 10:44:03.529993 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... May 10 10:44:03.574358 systemd-fsck[768]: ROOT: clean, 15/1628000 files, 120826/1617920 blocks May 10 10:44:03.592811 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. May 10 10:44:03.597912 systemd[1]: Mounting sysroot.mount - /sysroot... May 10 10:44:03.816498 kernel: EXT4-fs (vda9): mounted filesystem f8cce592-76ea-4219-9560-1ef21b28761f r/w with ordered data mode. Quota mode: none. May 10 10:44:03.823492 systemd[1]: Mounted sysroot.mount - /sysroot. May 10 10:44:03.827280 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. May 10 10:44:03.832033 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 10 10:44:03.838871 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... May 10 10:44:03.843919 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. May 10 10:44:03.848985 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent... May 10 10:44:03.854247 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 10 10:44:03.856453 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. May 10 10:44:03.863556 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. May 10 10:44:03.876514 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (776) May 10 10:44:03.876583 kernel: BTRFS info (device vda6): first mount of filesystem b607f6a7-c99c-4217-b084-4c38060efb12 May 10 10:44:03.890936 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... May 10 10:44:03.901941 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 10 10:44:03.901994 kernel: BTRFS info (device vda6): using free space tree May 10 10:44:03.909790 kernel: BTRFS info (device vda6): auto enabling async discard May 10 10:44:03.924115 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 10 10:44:04.027967 initrd-setup-root[804]: cut: /sysroot/etc/passwd: No such file or directory May 10 10:44:04.036056 initrd-setup-root[812]: cut: /sysroot/etc/group: No such file or directory May 10 10:44:04.045180 initrd-setup-root[819]: cut: /sysroot/etc/shadow: No such file or directory May 10 10:44:04.052576 initrd-setup-root[826]: cut: /sysroot/etc/gshadow: No such file or directory May 10 10:44:04.146520 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. May 10 10:44:04.148333 systemd[1]: Starting ignition-mount.service - Ignition (mount)... May 10 10:44:04.150433 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... May 10 10:44:04.161569 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
May 10 10:44:04.164061 kernel: BTRFS info (device vda6): last unmount of filesystem b607f6a7-c99c-4217-b084-4c38060efb12 May 10 10:44:04.200227 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. May 10 10:44:04.215343 ignition[895]: INFO : Ignition 2.21.0 May 10 10:44:04.215343 ignition[895]: INFO : Stage: mount May 10 10:44:04.219356 ignition[895]: INFO : no configs at "/usr/lib/ignition/base.d" May 10 10:44:04.219356 ignition[895]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" May 10 10:44:04.219356 ignition[895]: INFO : mount: mount passed May 10 10:44:04.219356 ignition[895]: INFO : Ignition finished successfully May 10 10:44:04.220603 systemd[1]: Finished ignition-mount.service - Ignition (mount). May 10 10:44:04.798975 systemd-networkd[739]: eth0: Gained IPv6LL May 10 10:44:11.100063 coreos-metadata[778]: May 10 10:44:11.099 WARN failed to locate config-drive, using the metadata service API instead May 10 10:44:11.146260 coreos-metadata[778]: May 10 10:44:11.146 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 May 10 10:44:11.161675 coreos-metadata[778]: May 10 10:44:11.161 INFO Fetch successful May 10 10:44:11.163111 coreos-metadata[778]: May 10 10:44:11.161 INFO wrote hostname ci-4330-0-0-n-4c6f505fd4.novalocal to /sysroot/etc/hostname May 10 10:44:11.166311 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. May 10 10:44:11.166515 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent. May 10 10:44:11.173791 systemd[1]: Starting ignition-files.service - Ignition (files)... May 10 10:44:11.204206 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 10 10:44:11.233813 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (910) May 10 10:44:11.240892 kernel: BTRFS info (device vda6): first mount of filesystem b607f6a7-c99c-4217-b084-4c38060efb12 May 10 10:44:11.240970 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 10 10:44:11.244898 kernel: BTRFS info (device vda6): using free space tree May 10 10:44:11.255787 kernel: BTRFS info (device vda6): auto enabling async discard May 10 10:44:11.262146 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
May 10 10:44:11.307838 ignition[928]: INFO : Ignition 2.21.0 May 10 10:44:11.307838 ignition[928]: INFO : Stage: files May 10 10:44:11.310903 ignition[928]: INFO : no configs at "/usr/lib/ignition/base.d" May 10 10:44:11.310903 ignition[928]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" May 10 10:44:11.310903 ignition[928]: DEBUG : files: compiled without relabeling support, skipping May 10 10:44:11.316200 ignition[928]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 10 10:44:11.316200 ignition[928]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 10 10:44:11.320339 ignition[928]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 10 10:44:11.320339 ignition[928]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 10 10:44:11.320339 ignition[928]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 10 10:44:11.318587 unknown[928]: wrote ssh authorized keys file for user: core May 10 10:44:11.327682 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" May 10 10:44:11.327682 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 May 10 10:44:11.407905 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK May 10 10:44:11.947679 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" May 10 10:44:11.947679 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" May 10 10:44:11.952627 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 May 10 10:44:12.658250 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK May 10 10:44:13.130857 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" May 10 10:44:13.130857 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" May 10 10:44:13.135947 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" May 10 10:44:13.135947 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" May 10 10:44:13.135947 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" May 10 10:44:13.135947 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 10 10:44:13.135947 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 10 10:44:13.135947 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 10 10:44:13.135947 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 10 10:44:13.135947 
ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" May 10 10:44:13.135947 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 10 10:44:13.135947 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" May 10 10:44:13.135947 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" May 10 10:44:13.135947 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" May 10 10:44:13.135947 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-x86-64.raw: attempt #1 May 10 10:44:13.622846 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK May 10 10:44:15.336156 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" May 10 10:44:15.336156 ignition[928]: INFO : files: op(c): [started] processing unit "prepare-helm.service" May 10 10:44:15.339494 ignition[928]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 10 10:44:15.339494 ignition[928]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 10 10:44:15.339494 ignition[928]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" May 10 10:44:15.339494 ignition[928]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" May 10 10:44:15.350523 ignition[928]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" May 10 10:44:15.350523 ignition[928]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" May 10 10:44:15.350523 ignition[928]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" May 10 10:44:15.350523 ignition[928]: INFO : files: files passed May 10 10:44:15.350523 ignition[928]: INFO : Ignition finished successfully May 10 10:44:15.341931 systemd[1]: Finished ignition-files.service - Ignition (files). May 10 10:44:15.346847 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... May 10 10:44:15.351953 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... May 10 10:44:15.366044 systemd[1]: ignition-quench.service: Deactivated successfully. May 10 10:44:15.366788 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
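Editor's note: once the files stage finishes, Ignition records its outcome in /sysroot/etc/.ignition-result.json, which is visible as /etc/.ignition-result.json on the booted system. A small Python sketch for inspecting that file after boot follows; the file's exact schema is not shown in this log, so the sketch only loads and pretty-prints whatever JSON is present.

import json
from pathlib import Path

RESULT_FILE = Path("/etc/.ignition-result.json")  # written as /sysroot/etc/.ignition-result.json above

def show_ignition_result(path: Path = RESULT_FILE) -> None:
    # Load and pretty-print the Ignition result record.
    data = json.loads(path.read_text())
    print(json.dumps(data, indent=2, sort_keys=True))

if __name__ == "__main__":
    show_ignition_result()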
May 10 10:44:15.376447 initrd-setup-root-after-ignition[958]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 10 10:44:15.376447 initrd-setup-root-after-ignition[958]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory May 10 10:44:15.383417 initrd-setup-root-after-ignition[962]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 10 10:44:15.385764 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. May 10 10:44:15.388721 systemd[1]: Reached target ignition-complete.target - Ignition Complete. May 10 10:44:15.391817 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... May 10 10:44:15.442501 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 10 10:44:15.444061 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. May 10 10:44:15.446272 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. May 10 10:44:15.447633 systemd[1]: Reached target initrd.target - Initrd Default Target. May 10 10:44:15.450559 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. May 10 10:44:15.452426 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... May 10 10:44:15.476794 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 10 10:44:15.483045 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... May 10 10:44:15.521100 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. May 10 10:44:15.522794 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. May 10 10:44:15.525448 systemd[1]: Stopped target timers.target - Timer Units. May 10 10:44:15.528453 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 10 10:44:15.528850 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 10 10:44:15.531841 systemd[1]: Stopped target initrd.target - Initrd Default Target. May 10 10:44:15.533641 systemd[1]: Stopped target basic.target - Basic System. May 10 10:44:15.536507 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. May 10 10:44:15.539065 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. May 10 10:44:15.541607 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. May 10 10:44:15.544577 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. May 10 10:44:15.547617 systemd[1]: Stopped target remote-fs.target - Remote File Systems. May 10 10:44:15.550504 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. May 10 10:44:15.553502 systemd[1]: Stopped target sysinit.target - System Initialization. May 10 10:44:15.556403 systemd[1]: Stopped target local-fs.target - Local File Systems. May 10 10:44:15.559452 systemd[1]: Stopped target swap.target - Swaps. May 10 10:44:15.562087 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 10 10:44:15.562363 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. May 10 10:44:15.565404 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. May 10 10:44:15.567473 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 10 10:44:15.569892 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. 
May 10 10:44:15.570139 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 10 10:44:15.572811 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 10 10:44:15.573085 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. May 10 10:44:15.577076 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 10 10:44:15.577387 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 10 10:44:15.579208 systemd[1]: ignition-files.service: Deactivated successfully. May 10 10:44:15.579581 systemd[1]: Stopped ignition-files.service - Ignition (files). May 10 10:44:15.585143 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 10 10:44:15.592794 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 10 10:44:15.594142 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 10 10:44:15.595987 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. May 10 10:44:15.598496 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 10 10:44:15.599915 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 10 10:44:15.610828 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 10 10:44:15.610938 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. May 10 10:44:15.624325 ignition[982]: INFO : Ignition 2.21.0 May 10 10:44:15.624325 ignition[982]: INFO : Stage: umount May 10 10:44:15.625426 ignition[982]: INFO : no configs at "/usr/lib/ignition/base.d" May 10 10:44:15.625426 ignition[982]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" May 10 10:44:15.625426 ignition[982]: INFO : umount: umount passed May 10 10:44:15.625426 ignition[982]: INFO : Ignition finished successfully May 10 10:44:15.627306 systemd[1]: ignition-mount.service: Deactivated successfully. May 10 10:44:15.629297 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 10 10:44:15.630347 systemd[1]: ignition-disks.service: Deactivated successfully. May 10 10:44:15.630394 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 10 10:44:15.631163 systemd[1]: ignition-kargs.service: Deactivated successfully. May 10 10:44:15.631203 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 10 10:44:15.632126 systemd[1]: ignition-fetch.service: Deactivated successfully. May 10 10:44:15.632165 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). May 10 10:44:15.633093 systemd[1]: Stopped target network.target - Network. May 10 10:44:15.634642 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 10 10:44:15.634717 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 10 10:44:15.637056 systemd[1]: Stopped target paths.target - Path Units. May 10 10:44:15.637947 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 10 10:44:15.641767 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 10 10:44:15.642301 systemd[1]: Stopped target slices.target - Slice Units. May 10 10:44:15.644501 systemd[1]: Stopped target sockets.target - Socket Units. May 10 10:44:15.645445 systemd[1]: iscsid.socket: Deactivated successfully. May 10 10:44:15.645482 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 10 10:44:15.647999 systemd[1]: iscsiuio.socket: Deactivated successfully. 
May 10 10:44:15.648030 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 10 10:44:15.649159 systemd[1]: ignition-setup.service: Deactivated successfully. May 10 10:44:15.649210 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 10 10:44:15.650335 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 10 10:44:15.650379 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 10 10:44:15.652130 systemd[1]: Stopping systemd-networkd.service - Network Configuration... May 10 10:44:15.655488 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 10 10:44:15.657908 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 10 10:44:15.658567 systemd[1]: sysroot-boot.service: Deactivated successfully. May 10 10:44:15.658658 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. May 10 10:44:15.659639 systemd[1]: systemd-networkd.service: Deactivated successfully. May 10 10:44:15.659738 systemd[1]: Stopped systemd-networkd.service - Network Configuration. May 10 10:44:15.662362 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. May 10 10:44:15.662571 systemd[1]: systemd-resolved.service: Deactivated successfully. May 10 10:44:15.662661 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. May 10 10:44:15.669578 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. May 10 10:44:15.670813 systemd[1]: Stopped target network-pre.target - Preparation for Network. May 10 10:44:15.671381 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 10 10:44:15.671423 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 10 10:44:15.672555 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 10 10:44:15.672619 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. May 10 10:44:15.674788 systemd[1]: Stopping network-cleanup.service - Network Cleanup... May 10 10:44:15.675909 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 10 10:44:15.675954 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 10 10:44:15.678054 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 10 10:44:15.678096 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 10 10:44:15.679798 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 10 10:44:15.679840 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. May 10 10:44:15.680591 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. May 10 10:44:15.680630 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. May 10 10:44:15.681792 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... May 10 10:44:15.683702 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 10 10:44:15.683762 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. May 10 10:44:15.693992 systemd[1]: systemd-udevd.service: Deactivated successfully. May 10 10:44:15.694672 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. May 10 10:44:15.696238 systemd[1]: network-cleanup.service: Deactivated successfully. May 10 10:44:15.696326 systemd[1]: Stopped network-cleanup.service - Network Cleanup. 
May 10 10:44:15.698368 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 10 10:44:15.698430 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. May 10 10:44:15.700309 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 10 10:44:15.700343 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. May 10 10:44:15.700866 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 10 10:44:15.700915 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. May 10 10:44:15.701499 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 10 10:44:15.701543 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. May 10 10:44:15.702627 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 10 10:44:15.702673 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 10 10:44:15.705849 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... May 10 10:44:15.706826 systemd[1]: systemd-network-generator.service: Deactivated successfully. May 10 10:44:15.706884 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. May 10 10:44:15.710052 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 10 10:44:15.710110 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 10 10:44:15.711026 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. May 10 10:44:15.711072 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 10 10:44:15.713838 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 10 10:44:15.713884 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. May 10 10:44:15.714661 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 10 10:44:15.715572 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 10 10:44:15.717349 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. May 10 10:44:15.717406 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully. May 10 10:44:15.717449 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. May 10 10:44:15.717495 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. May 10 10:44:15.717987 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 10 10:44:15.718073 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. May 10 10:44:15.719361 systemd[1]: Reached target initrd-switch-root.target - Switch Root. May 10 10:44:15.721853 systemd[1]: Starting initrd-switch-root.service - Switch Root... May 10 10:44:15.735616 systemd[1]: Switching root. May 10 10:44:15.769365 systemd-journald[187]: Journal stopped May 10 10:44:17.867760 systemd-journald[187]: Received SIGTERM from PID 1 (systemd). 
May 10 10:44:17.867835 kernel: SELinux: policy capability network_peer_controls=1 May 10 10:44:17.867856 kernel: SELinux: policy capability open_perms=1 May 10 10:44:17.867869 kernel: SELinux: policy capability extended_socket_class=1 May 10 10:44:17.867882 kernel: SELinux: policy capability always_check_network=0 May 10 10:44:17.867895 kernel: SELinux: policy capability cgroup_seclabel=1 May 10 10:44:17.867907 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 10 10:44:17.867919 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 10 10:44:17.867936 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 10 10:44:17.867951 kernel: audit: type=1403 audit(1746873856.691:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 May 10 10:44:17.867965 systemd[1]: Successfully loaded SELinux policy in 78.373ms. May 10 10:44:17.868001 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 16.953ms. May 10 10:44:17.868016 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 10 10:44:17.868030 systemd[1]: Detected virtualization kvm. May 10 10:44:17.868044 systemd[1]: Detected architecture x86-64. May 10 10:44:17.868058 systemd[1]: Detected first boot. May 10 10:44:17.868071 systemd[1]: Hostname set to <ci-4330-0-0-n-4c6f505fd4.novalocal>. May 10 10:44:17.868087 systemd[1]: Initializing machine ID from VM UUID. May 10 10:44:17.868100 zram_generator::config[1025]: No configuration found. May 10 10:44:17.868115 kernel: Guest personality initialized and is inactive May 10 10:44:17.868127 kernel: VMCI host device registered (name=vmci, major=10, minor=125) May 10 10:44:17.868139 kernel: Initialized host personality May 10 10:44:17.868151 kernel: NET: Registered PF_VSOCK protocol family May 10 10:44:17.868163 systemd[1]: Populated /etc with preset unit settings. May 10 10:44:17.868177 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. May 10 10:44:17.868191 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 10 10:44:17.868206 systemd[1]: Stopped initrd-switch-root.service - Switch Root. May 10 10:44:17.868219 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 10 10:44:17.868233 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. May 10 10:44:17.868251 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. May 10 10:44:17.868266 systemd[1]: Created slice system-getty.slice - Slice /system/getty. May 10 10:44:17.868279 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. May 10 10:44:17.868293 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. May 10 10:44:17.868306 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. May 10 10:44:17.868321 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. May 10 10:44:17.868335 systemd[1]: Created slice user.slice - User and Session Slice. May 10 10:44:17.868348 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 10 10:44:17.868361 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
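Editor's note: the systemd banner above lists the build's compile-time options as a +/- feature string. A throwaway Python sketch that splits that exact string into enabled and disabled sets (nothing here beyond the text already in the log):

# Feature string copied verbatim from the systemd 256.8 banner above.
FEATURES = ("+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS "
            "+OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD "
            "+LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY "
            "-P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK "
            "-XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE")

enabled = sorted(f[1:] for f in FEATURES.split() if f.startswith("+"))
disabled = sorted(f[1:] for f in FEATURES.split() if f.startswith("-"))
print("enabled: ", ", ".join(enabled))
print("disabled:", ", ".join(disabled))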
May 10 10:44:17.868375 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. May 10 10:44:17.868388 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. May 10 10:44:17.868402 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. May 10 10:44:17.868419 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 10 10:44:17.868432 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... May 10 10:44:17.868445 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 10 10:44:17.868459 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 10 10:44:17.868472 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. May 10 10:44:17.868485 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. May 10 10:44:17.868498 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. May 10 10:44:17.868511 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. May 10 10:44:17.868526 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 10 10:44:17.868540 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 10 10:44:17.868553 systemd[1]: Reached target slices.target - Slice Units. May 10 10:44:17.868568 systemd[1]: Reached target swap.target - Swaps. May 10 10:44:17.868581 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. May 10 10:44:17.868593 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. May 10 10:44:17.868607 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. May 10 10:44:17.868619 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 10 10:44:17.868632 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 10 10:44:17.868644 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 10 10:44:17.868660 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. May 10 10:44:17.868673 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... May 10 10:44:17.868686 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... May 10 10:44:17.868713 systemd[1]: Mounting media.mount - External Media Directory... May 10 10:44:17.868727 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 10 10:44:17.868739 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... May 10 10:44:17.868752 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... May 10 10:44:17.868764 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... May 10 10:44:17.868780 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 10 10:44:17.868793 systemd[1]: Reached target machines.target - Containers. May 10 10:44:17.868806 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... May 10 10:44:17.868818 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
May 10 10:44:17.868831 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 10 10:44:17.868843 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... May 10 10:44:17.868856 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 10 10:44:17.868868 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 10 10:44:17.868880 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 10 10:44:17.868895 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... May 10 10:44:17.868908 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 10 10:44:17.868921 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 10 10:44:17.868934 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 10 10:44:17.868946 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. May 10 10:44:17.868958 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 10 10:44:17.868971 systemd[1]: Stopped systemd-fsck-usr.service. May 10 10:44:17.868985 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 10 10:44:17.868999 systemd[1]: Starting systemd-journald.service - Journal Service... May 10 10:44:17.869014 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 10 10:44:17.869027 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 10 10:44:17.869039 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... May 10 10:44:17.869051 kernel: loop: module loaded May 10 10:44:17.869063 kernel: fuse: init (API version 7.39) May 10 10:44:17.869077 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... May 10 10:44:17.869090 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 10 10:44:17.869103 systemd[1]: verity-setup.service: Deactivated successfully. May 10 10:44:17.869116 systemd[1]: Stopped verity-setup.service. May 10 10:44:17.869129 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 10 10:44:17.869144 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. May 10 10:44:17.869158 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. May 10 10:44:17.869170 systemd[1]: Mounted media.mount - External Media Directory. May 10 10:44:17.869183 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. May 10 10:44:17.869195 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. May 10 10:44:17.869207 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. May 10 10:44:17.869220 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 10 10:44:17.869233 kernel: ACPI: bus type drm_connector registered May 10 10:44:17.869247 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 10 10:44:17.869260 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. 
May 10 10:44:17.869272 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 10 10:44:17.869285 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 10 10:44:17.869297 systemd[1]: modprobe@drm.service: Deactivated successfully. May 10 10:44:17.869311 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 10 10:44:17.869323 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 10 10:44:17.869356 systemd-journald[1108]: Collecting audit messages is disabled. May 10 10:44:17.869386 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 10 10:44:17.869400 systemd-journald[1108]: Journal started May 10 10:44:17.869426 systemd-journald[1108]: Runtime Journal (/run/log/journal/732b66c745b84afa8332c1bf3956111b) is 8M, max 78.5M, 70.5M free. May 10 10:44:17.484161 systemd[1]: Queued start job for default target multi-user.target. May 10 10:44:17.495866 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. May 10 10:44:17.496260 systemd[1]: systemd-journald.service: Deactivated successfully. May 10 10:44:17.873769 systemd[1]: Started systemd-journald.service - Journal Service. May 10 10:44:17.875528 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 10 10:44:17.875757 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. May 10 10:44:17.876517 systemd[1]: modprobe@loop.service: Deactivated successfully. May 10 10:44:17.876686 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 10 10:44:17.878529 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 10 10:44:17.879388 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 10 10:44:17.881113 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. May 10 10:44:17.881987 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. May 10 10:44:17.898512 systemd[1]: Reached target network-pre.target - Preparation for Network. May 10 10:44:17.902853 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... May 10 10:44:17.906467 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... May 10 10:44:17.907380 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 10 10:44:17.907423 systemd[1]: Reached target local-fs.target - Local File Systems. May 10 10:44:17.909179 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. May 10 10:44:17.923945 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... May 10 10:44:17.924709 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 10 10:44:17.925982 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... May 10 10:44:17.928841 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... May 10 10:44:17.930935 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 10 10:44:17.937870 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... 
May 10 10:44:17.938503 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 10 10:44:17.943003 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 10 10:44:17.949067 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... May 10 10:44:17.957824 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 10 10:44:17.962009 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. May 10 10:44:17.965064 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. May 10 10:44:17.966088 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. May 10 10:44:17.982604 systemd-journald[1108]: Time spent on flushing to /var/log/journal/732b66c745b84afa8332c1bf3956111b is 52.361ms for 971 entries. May 10 10:44:17.982604 systemd-journald[1108]: System Journal (/var/log/journal/732b66c745b84afa8332c1bf3956111b) is 8M, max 584.8M, 576.8M free. May 10 10:44:18.136522 systemd-journald[1108]: Received client request to flush runtime journal. May 10 10:44:18.136614 kernel: loop0: detected capacity change from 0 to 146240 May 10 10:44:17.993305 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 10 10:44:18.007974 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. May 10 10:44:18.010227 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. May 10 10:44:18.022788 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... May 10 10:44:18.024413 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 10 10:44:18.049904 systemd-tmpfiles[1164]: ACLs are not supported, ignoring. May 10 10:44:18.049922 systemd-tmpfiles[1164]: ACLs are not supported, ignoring. May 10 10:44:18.065121 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 10 10:44:18.068919 systemd[1]: Starting systemd-sysusers.service - Create System Users... May 10 10:44:18.140018 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. May 10 10:44:18.153776 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 10 10:44:18.167375 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. May 10 10:44:18.180745 kernel: loop1: detected capacity change from 0 to 113872 May 10 10:44:18.202456 systemd[1]: Finished systemd-sysusers.service - Create System Users. May 10 10:44:18.208945 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 10 10:44:18.249917 systemd-tmpfiles[1185]: ACLs are not supported, ignoring. May 10 10:44:18.249943 systemd-tmpfiles[1185]: ACLs are not supported, ignoring. May 10 10:44:18.264374 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 10 10:44:18.271766 kernel: loop2: detected capacity change from 0 to 218376 May 10 10:44:18.347917 kernel: loop3: detected capacity change from 0 to 8 May 10 10:44:18.368797 kernel: loop4: detected capacity change from 0 to 146240 May 10 10:44:18.433779 kernel: loop5: detected capacity change from 0 to 113872 May 10 10:44:18.497935 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. 
May 10 10:44:18.514437 kernel: loop6: detected capacity change from 0 to 218376 May 10 10:44:18.607306 kernel: loop7: detected capacity change from 0 to 8 May 10 10:44:18.605095 (sd-merge)[1191]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'. May 10 10:44:18.605591 (sd-merge)[1191]: Merged extensions into '/usr'. May 10 10:44:18.618349 systemd[1]: Reload requested from client PID 1163 ('systemd-sysext') (unit systemd-sysext.service)... May 10 10:44:18.618376 systemd[1]: Reloading... May 10 10:44:18.707746 zram_generator::config[1214]: No configuration found. May 10 10:44:18.967532 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 10 10:44:19.078906 systemd[1]: Reloading finished in 459 ms. May 10 10:44:19.093665 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. May 10 10:44:19.094725 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. May 10 10:44:19.103150 systemd[1]: Starting ensure-sysext.service... May 10 10:44:19.108631 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 10 10:44:19.112865 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 10 10:44:19.144467 systemd[1]: Reload requested from client PID 1274 ('systemctl') (unit ensure-sysext.service)... May 10 10:44:19.144494 systemd[1]: Reloading... May 10 10:44:19.170487 systemd-tmpfiles[1275]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. May 10 10:44:19.170527 systemd-tmpfiles[1275]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. May 10 10:44:19.170929 systemd-tmpfiles[1275]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 10 10:44:19.171232 systemd-tmpfiles[1275]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. May 10 10:44:19.172104 systemd-tmpfiles[1275]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 10 10:44:19.172418 systemd-tmpfiles[1275]: ACLs are not supported, ignoring. May 10 10:44:19.172479 systemd-tmpfiles[1275]: ACLs are not supported, ignoring. May 10 10:44:19.179813 systemd-tmpfiles[1275]: Detected autofs mount point /boot during canonicalization of boot. May 10 10:44:19.179825 systemd-tmpfiles[1275]: Skipping /boot May 10 10:44:19.196536 systemd-udevd[1276]: Using default interface naming scheme 'v255'. May 10 10:44:19.196978 systemd-tmpfiles[1275]: Detected autofs mount point /boot during canonicalization of boot. May 10 10:44:19.196992 systemd-tmpfiles[1275]: Skipping /boot May 10 10:44:19.232265 ldconfig[1158]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 10 10:44:19.255190 zram_generator::config[1300]: No configuration found. May 10 10:44:19.486026 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
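Editor's note: the (sd-merge) lines above show systemd-sysext overlaying the 'containerd-flatcar', 'docker-flatcar', 'kubernetes' and 'oem-openstack' extension images into /usr. The kubernetes image is the one Ignition linked earlier as /etc/extensions/kubernetes.raw -> /opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw. The Python sketch below simply lists that directory and resolves the symlinks; it only inspects /etc/extensions, the search path visible in this log, and is not how sd-merge itself discovers images.

from pathlib import Path

EXTENSIONS_DIR = Path("/etc/extensions")  # populated by Ignition in the files stage above

def list_extensions(root: Path = EXTENSIONS_DIR) -> None:
    # Print each extension image and, for symlinks, the backing .raw file.
    for entry in sorted(root.iterdir()):
        target = entry.resolve() if entry.is_symlink() else entry
        print(f"{entry.name} -> {target}")

if __name__ == "__main__":
    list_extensions()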
May 10 10:44:19.568732 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1346) May 10 10:44:19.629721 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 May 10 10:44:19.633729 kernel: mousedev: PS/2 mouse device common for all mice May 10 10:44:19.635445 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. May 10 10:44:19.635538 systemd[1]: Reloading finished in 490 ms. May 10 10:44:19.637740 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 May 10 10:44:19.644926 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 10 10:44:19.646723 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. May 10 10:44:19.659542 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 10 10:44:19.660710 kernel: ACPI: button: Power Button [PWRF] May 10 10:44:19.711256 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 10 10:44:19.714126 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 10 10:44:19.718110 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... May 10 10:44:19.719919 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 10 10:44:19.721668 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 10 10:44:19.727013 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 10 10:44:19.734160 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 10 10:44:19.735873 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 10 10:44:19.736049 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 10 10:44:19.739374 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... May 10 10:44:19.745861 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 10 10:44:19.752834 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 10 10:44:19.756978 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... May 10 10:44:19.757529 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 10 10:44:19.764339 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 10 10:44:19.764543 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 10 10:44:19.765791 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 10 10:44:19.765920 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). 
May 10 10:44:19.773087 systemd[1]: Starting systemd-userdbd.service - User Database Manager... May 10 10:44:19.774765 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 10 10:44:19.782282 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 10 10:44:19.782554 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 10 10:44:19.792143 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 10 10:44:19.793328 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 10 10:44:19.793465 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 10 10:44:19.793658 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 10 10:44:19.804992 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 10 10:44:19.805184 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 10 10:44:19.812902 systemd[1]: Finished ensure-sysext.service. May 10 10:44:19.826300 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... May 10 10:44:19.827439 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 10 10:44:19.827760 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 10 10:44:19.830937 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. May 10 10:44:19.840056 systemd[1]: modprobe@loop.service: Deactivated successfully. May 10 10:44:19.840281 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 10 10:44:19.846024 systemd[1]: modprobe@drm.service: Deactivated successfully. May 10 10:44:19.846777 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 10 10:44:19.848520 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 10 10:44:19.848606 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 10 10:44:19.854202 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. May 10 10:44:19.859917 systemd[1]: Starting systemd-update-done.service - Update is Completed... May 10 10:44:19.886467 augenrules[1450]: No rules May 10 10:44:19.887513 systemd[1]: audit-rules.service: Deactivated successfully. May 10 10:44:19.888373 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 10 10:44:19.895324 systemd[1]: Finished systemd-update-done.service - Update is Completed. May 10 10:44:19.903665 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 10 10:44:19.926275 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... May 10 10:44:19.933725 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0 May 10 10:44:19.934792 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
May 10 10:44:19.941833 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console May 10 10:44:19.954418 kernel: Console: switching to colour dummy device 80x25 May 10 10:44:19.954495 kernel: [drm] features: -virgl +edid -resource_blob -host_visible May 10 10:44:19.954532 kernel: [drm] features: -context_init May 10 10:44:19.951359 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. May 10 10:44:19.955164 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 10 10:44:19.956719 kernel: [drm] number of scanouts: 1 May 10 10:44:19.956765 kernel: [drm] number of cap sets: 0 May 10 10:44:19.959745 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0 May 10 10:44:19.971703 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device May 10 10:44:19.971785 kernel: Console: switching to colour frame buffer device 160x50 May 10 10:44:19.972217 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. May 10 10:44:19.983738 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device May 10 10:44:19.992820 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 10 10:44:19.993053 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 10 10:44:19.998380 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. May 10 10:44:20.002202 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 10 10:44:20.049340 systemd[1]: Started systemd-userdbd.service - User Database Manager. May 10 10:44:20.127052 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 10 10:44:20.161392 systemd-resolved[1423]: Positive Trust Anchors: May 10 10:44:20.161739 systemd-resolved[1423]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 10 10:44:20.161839 systemd-resolved[1423]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 10 10:44:20.169068 systemd-resolved[1423]: Using system hostname 'ci-4330-0-0-n-4c6f505fd4.novalocal'. May 10 10:44:20.170571 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 10 10:44:20.170774 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 10 10:44:20.180372 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. May 10 10:44:20.181890 systemd[1]: Reached target sysinit.target - System Initialization. May 10 10:44:20.182172 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 10 10:44:20.182950 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 10 10:44:20.183046 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. 
May 10 10:44:20.183115 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 10 10:44:20.183189 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 10 10:44:20.183215 systemd[1]: Reached target paths.target - Path Units. May 10 10:44:20.183274 systemd[1]: Reached target time-set.target - System Time Set. May 10 10:44:20.183456 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 10 10:44:20.183579 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 10 10:44:20.183661 systemd[1]: Reached target timers.target - Timer Units. May 10 10:44:20.187441 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 10 10:44:20.188286 systemd-networkd[1420]: lo: Link UP May 10 10:44:20.188290 systemd-networkd[1420]: lo: Gained carrier May 10 10:44:20.188931 systemd[1]: Starting docker.socket - Docker Socket for the API... May 10 10:44:20.192285 systemd-networkd[1420]: Enumeration completed May 10 10:44:20.193179 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). May 10 10:44:20.194329 systemd-networkd[1420]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 10 10:44:20.194416 systemd-networkd[1420]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 10 10:44:20.195353 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). May 10 10:44:20.195444 systemd[1]: Reached target ssh-access.target - SSH Access Available. May 10 10:44:20.196656 systemd-networkd[1420]: eth0: Link UP May 10 10:44:20.196744 systemd-networkd[1420]: eth0: Gained carrier May 10 10:44:20.196806 systemd-networkd[1420]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 10 10:44:20.198069 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 10 10:44:20.200222 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. May 10 10:44:20.203254 systemd[1]: Started systemd-networkd.service - Network Configuration. May 10 10:44:20.205318 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 10 10:44:20.207861 systemd[1]: Reached target network.target - Network. May 10 10:44:20.209368 systemd[1]: Reached target sockets.target - Socket Units. May 10 10:44:20.211463 systemd[1]: Reached target basic.target - Basic System. May 10 10:44:20.212649 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 10 10:44:20.212776 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 10 10:44:20.215770 systemd[1]: Starting containerd.service - containerd container runtime... May 10 10:44:20.217771 systemd-networkd[1420]: eth0: DHCPv4 address 172.24.4.188/24, gateway 172.24.4.1 acquired from 172.24.4.1 May 10 10:44:20.218775 systemd-timesyncd[1436]: Network configuration changed, trying to establish connection. May 10 10:44:20.219316 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... May 10 10:44:20.222396 systemd[1]: Starting dbus.service - D-Bus System Message Bus... 
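Editor's note: systemd-networkd reports eth0 coming up with a DHCPv4 lease of 172.24.4.188/24 and gateway 172.24.4.1. A quick standard-library Python sketch that re-checks those two logged values for consistency (the gateway should sit inside the on-link /24):

import ipaddress

# Values taken directly from the systemd-networkd lease message above.
iface = ipaddress.ip_interface("172.24.4.188/24")
gateway = ipaddress.ip_address("172.24.4.1")

print(iface.network)             # 172.24.4.0/24
print(gateway in iface.network)  # True: the gateway is reachable on-link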
May 10 10:44:20.233835 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... May 10 10:44:20.238186 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 10 10:44:20.243602 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 10 10:44:20.244298 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 10 10:44:20.250061 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... May 10 10:44:20.251304 jq[1488]: false May 10 10:44:20.257889 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 10 10:44:20.267161 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 10 10:44:20.274883 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 10 10:44:20.280908 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 10 10:44:20.282001 oslogin_cache_refresh[1492]: Refreshing passwd entry cache May 10 10:44:20.284104 google_oslogin_nss_cache[1492]: oslogin_cache_refresh[1492]: Refreshing passwd entry cache May 10 10:44:20.287995 systemd[1]: Starting systemd-logind.service - User Login Management... May 10 10:44:20.293914 google_oslogin_nss_cache[1492]: oslogin_cache_refresh[1492]: Failure getting users, quitting May 10 10:44:20.295971 oslogin_cache_refresh[1492]: Failure getting users, quitting May 10 10:44:20.297924 google_oslogin_nss_cache[1492]: oslogin_cache_refresh[1492]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. May 10 10:44:20.297924 google_oslogin_nss_cache[1492]: oslogin_cache_refresh[1492]: Refreshing group entry cache May 10 10:44:20.297837 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... May 10 10:44:20.296016 oslogin_cache_refresh[1492]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. May 10 10:44:20.296066 oslogin_cache_refresh[1492]: Refreshing group entry cache May 10 10:44:20.304401 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... May 10 10:44:20.308425 extend-filesystems[1491]: Found loop4 May 10 10:44:20.308438 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 10 10:44:20.311063 extend-filesystems[1491]: Found loop5 May 10 10:44:20.311063 extend-filesystems[1491]: Found loop6 May 10 10:44:20.311063 extend-filesystems[1491]: Found loop7 May 10 10:44:20.311063 extend-filesystems[1491]: Found vda May 10 10:44:20.311063 extend-filesystems[1491]: Found vda1 May 10 10:44:20.311063 extend-filesystems[1491]: Found vda2 May 10 10:44:20.311063 extend-filesystems[1491]: Found vda3 May 10 10:44:20.311063 extend-filesystems[1491]: Found usr May 10 10:44:20.311063 extend-filesystems[1491]: Found vda4 May 10 10:44:20.311063 extend-filesystems[1491]: Found vda6 May 10 10:44:20.311063 extend-filesystems[1491]: Found vda7 May 10 10:44:20.311063 extend-filesystems[1491]: Found vda9 May 10 10:44:20.311063 extend-filesystems[1491]: Checking size of /dev/vda9 May 10 10:44:20.381486 extend-filesystems[1491]: Resized partition /dev/vda9 May 10 10:44:20.311075 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. 
May 10 10:44:20.314985 oslogin_cache_refresh[1492]: Failure getting groups, quitting May 10 10:44:20.397573 google_oslogin_nss_cache[1492]: oslogin_cache_refresh[1492]: Failure getting groups, quitting May 10 10:44:20.397573 google_oslogin_nss_cache[1492]: oslogin_cache_refresh[1492]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. May 10 10:44:20.397630 extend-filesystems[1517]: resize2fs 1.47.2 (1-Jan-2025) May 10 10:44:20.313914 systemd[1]: Starting update-engine.service - Update Engine... May 10 10:44:20.314997 oslogin_cache_refresh[1492]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. May 10 10:44:20.331831 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 10 10:44:20.398595 update_engine[1504]: I20250510 10:44:20.385449 1504 main.cc:92] Flatcar Update Engine starting May 10 10:44:20.468223 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 2014203 blocks May 10 10:44:20.468260 kernel: EXT4-fs (vda9): resized filesystem to 2014203 May 10 10:44:20.468278 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1329) May 10 10:44:20.359567 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. May 10 10:44:20.366113 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 10 10:44:20.468580 jq[1506]: true May 10 10:44:20.368335 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 10 10:44:20.468820 jq[1519]: true May 10 10:44:20.371927 systemd[1]: google-oslogin-cache.service: Deactivated successfully. May 10 10:44:20.372119 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. May 10 10:44:20.379464 systemd[1]: motdgen.service: Deactivated successfully. May 10 10:44:20.379639 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 10 10:44:20.385275 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 10 10:44:20.385486 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. May 10 10:44:20.441046 (ntainerd)[1527]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 10 10:44:20.475330 systemd[1]: extend-filesystems.service: Deactivated successfully. May 10 10:44:20.485381 extend-filesystems[1517]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required May 10 10:44:20.485381 extend-filesystems[1517]: old_desc_blocks = 1, new_desc_blocks = 1 May 10 10:44:20.485381 extend-filesystems[1517]: The filesystem on /dev/vda9 is now 2014203 (4k) blocks long. May 10 10:44:20.475527 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 10 10:44:20.517656 extend-filesystems[1491]: Resized filesystem in /dev/vda9 May 10 10:44:20.521456 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. May 10 10:44:20.535036 tar[1518]: linux-amd64/LICENSE May 10 10:44:20.535036 tar[1518]: linux-amd64/helm May 10 10:44:20.566260 systemd-logind[1498]: New seat seat0. May 10 10:44:20.571666 systemd-logind[1498]: Watching system buttons on /dev/input/event2 (Power Button) May 10 10:44:20.571686 systemd-logind[1498]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) May 10 10:44:20.571882 systemd[1]: Started systemd-logind.service - User Login Management. 
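Editor's note: the extend-filesystems output above grows the root filesystem on /dev/vda9 online from 1617920 to 2014203 blocks of 4 KiB, roughly 6.2 GiB before and 7.7 GiB after the resize. The short Python sketch below only converts those logged block counts into sizes:

BLOCK_SIZE = 4096  # "(4k) blocks" per the resize2fs / EXT4 messages above

def blocks_to_gib(blocks: int, block_size: int = BLOCK_SIZE) -> float:
    # Convert an ext4 block count to GiB.
    return blocks * block_size / 2**30

old_blocks, new_blocks = 1_617_920, 2_014_203
print(f"before: {blocks_to_gib(old_blocks):.2f} GiB")
print(f"after:  {blocks_to_gib(new_blocks):.2f} GiB")
print(f"gained: {blocks_to_gib(new_blocks - old_blocks):.2f} GiB")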
May 10 10:44:20.593651 dbus-daemon[1486]: [system] SELinux support is enabled May 10 10:44:20.598264 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 10 10:44:20.604576 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 10 10:44:20.605032 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 10 10:44:20.606153 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 10 10:44:20.609403 update_engine[1504]: I20250510 10:44:20.607194 1504 update_check_scheduler.cc:74] Next update check in 8m46s May 10 10:44:20.606175 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 10 10:44:20.606639 systemd[1]: Started update-engine.service - Update Engine. May 10 10:44:20.611981 dbus-daemon[1486]: [system] Successfully activated service 'org.freedesktop.systemd1' May 10 10:44:20.614265 bash[1550]: Updated "/home/core/.ssh/authorized_keys" May 10 10:44:20.622948 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 10 10:44:20.626780 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 10 10:44:20.641593 systemd[1]: Starting sshkeys.service... May 10 10:44:20.681654 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. May 10 10:44:20.694029 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... May 10 10:44:20.841752 locksmithd[1551]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 10 10:44:20.851856 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. 
May 10 10:44:20.957514 containerd[1527]: time="2025-05-10T10:44:20Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 May 10 10:44:20.960205 containerd[1527]: time="2025-05-10T10:44:20.960045389Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 May 10 10:44:20.983704 containerd[1527]: time="2025-05-10T10:44:20.982641178Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="11.321µs" May 10 10:44:20.983704 containerd[1527]: time="2025-05-10T10:44:20.982741947Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 May 10 10:44:20.983704 containerd[1527]: time="2025-05-10T10:44:20.982769508Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 May 10 10:44:20.983704 containerd[1527]: time="2025-05-10T10:44:20.982942012Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 May 10 10:44:20.983704 containerd[1527]: time="2025-05-10T10:44:20.982961579Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 May 10 10:44:20.983704 containerd[1527]: time="2025-05-10T10:44:20.982989361Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 10 10:44:20.983704 containerd[1527]: time="2025-05-10T10:44:20.983056146Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 10 10:44:20.983704 containerd[1527]: time="2025-05-10T10:44:20.983071124Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 10 10:44:20.983704 containerd[1527]: time="2025-05-10T10:44:20.983313679Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 10 10:44:20.983704 containerd[1527]: time="2025-05-10T10:44:20.983333436Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 10 10:44:20.983704 containerd[1527]: time="2025-05-10T10:44:20.983346340Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 10 10:44:20.983704 containerd[1527]: time="2025-05-10T10:44:20.983357140Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 May 10 10:44:20.984129 containerd[1527]: time="2025-05-10T10:44:20.983446749Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 May 10 10:44:20.984129 containerd[1527]: time="2025-05-10T10:44:20.983668394Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 10 10:44:20.984129 containerd[1527]: time="2025-05-10T10:44:20.983724730Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" 
id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 10 10:44:20.984129 containerd[1527]: time="2025-05-10T10:44:20.983744848Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 May 10 10:44:20.984129 containerd[1527]: time="2025-05-10T10:44:20.983776267Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 May 10 10:44:20.984129 containerd[1527]: time="2025-05-10T10:44:20.984016557Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 May 10 10:44:20.984129 containerd[1527]: time="2025-05-10T10:44:20.984078373Z" level=info msg="metadata content store policy set" policy=shared May 10 10:44:20.991740 containerd[1527]: time="2025-05-10T10:44:20.991160886Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 May 10 10:44:20.991740 containerd[1527]: time="2025-05-10T10:44:20.991220889Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 May 10 10:44:20.991740 containerd[1527]: time="2025-05-10T10:44:20.991237460Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 May 10 10:44:20.991740 containerd[1527]: time="2025-05-10T10:44:20.991250565Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 May 10 10:44:20.991740 containerd[1527]: time="2025-05-10T10:44:20.991262868Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 May 10 10:44:20.991740 containerd[1527]: time="2025-05-10T10:44:20.991273678Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 May 10 10:44:20.991740 containerd[1527]: time="2025-05-10T10:44:20.991285580Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 May 10 10:44:20.991740 containerd[1527]: time="2025-05-10T10:44:20.991297663Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 May 10 10:44:20.991740 containerd[1527]: time="2025-05-10T10:44:20.991310247Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 May 10 10:44:20.991740 containerd[1527]: time="2025-05-10T10:44:20.991321207Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 May 10 10:44:20.991740 containerd[1527]: time="2025-05-10T10:44:20.991330895Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 May 10 10:44:20.991740 containerd[1527]: time="2025-05-10T10:44:20.991343399Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 May 10 10:44:20.993977 containerd[1527]: time="2025-05-10T10:44:20.991927444Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 May 10 10:44:20.993977 containerd[1527]: time="2025-05-10T10:44:20.991954084Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 May 10 10:44:20.993977 containerd[1527]: time="2025-05-10T10:44:20.991969473Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 May 10 10:44:20.993977 containerd[1527]: time="2025-05-10T10:44:20.991980914Z" level=info msg="loading 
plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 May 10 10:44:20.993977 containerd[1527]: time="2025-05-10T10:44:20.991991795Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 May 10 10:44:20.993977 containerd[1527]: time="2025-05-10T10:44:20.992003607Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 May 10 10:44:20.993977 containerd[1527]: time="2025-05-10T10:44:20.992015058Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 May 10 10:44:20.993977 containerd[1527]: time="2025-05-10T10:44:20.992025408Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 May 10 10:44:20.993977 containerd[1527]: time="2025-05-10T10:44:20.992173666Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 May 10 10:44:20.993977 containerd[1527]: time="2025-05-10T10:44:20.992194004Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 May 10 10:44:20.993977 containerd[1527]: time="2025-05-10T10:44:20.992214232Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 May 10 10:44:20.993977 containerd[1527]: time="2025-05-10T10:44:20.992312015Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" May 10 10:44:20.993977 containerd[1527]: time="2025-05-10T10:44:20.992328576Z" level=info msg="Start snapshots syncer" May 10 10:44:20.993977 containerd[1527]: time="2025-05-10T10:44:20.992355647Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 May 10 10:44:20.994279 containerd[1527]: time="2025-05-10T10:44:20.992839565Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" May 10 10:44:20.994279 containerd[1527]: time="2025-05-10T10:44:20.992930946Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 May 10 10:44:20.994405 containerd[1527]: time="2025-05-10T10:44:20.993229967Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 May 10 10:44:20.994405 containerd[1527]: time="2025-05-10T10:44:20.993366713Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 May 10 10:44:20.994405 containerd[1527]: time="2025-05-10T10:44:20.993407760Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 May 10 10:44:20.994405 containerd[1527]: time="2025-05-10T10:44:20.993425854Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 May 10 10:44:20.994405 containerd[1527]: time="2025-05-10T10:44:20.993445942Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 May 10 10:44:20.994405 containerd[1527]: time="2025-05-10T10:44:20.993465399Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 May 10 10:44:20.994405 containerd[1527]: time="2025-05-10T10:44:20.993480086Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 May 10 10:44:20.994405 containerd[1527]: time="2025-05-10T10:44:20.993497088Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 May 10 10:44:20.994405 containerd[1527]: time="2025-05-10T10:44:20.993529389Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 May 10 10:44:20.994405 containerd[1527]: 
time="2025-05-10T10:44:20.993564134Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 May 10 10:44:20.994405 containerd[1527]: time="2025-05-10T10:44:20.993581937Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 May 10 10:44:20.994405 containerd[1527]: time="2025-05-10T10:44:20.993619748Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 10 10:44:20.994405 containerd[1527]: time="2025-05-10T10:44:20.993640277Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 10 10:44:20.994405 containerd[1527]: time="2025-05-10T10:44:20.993655004Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 10 10:44:20.996820 containerd[1527]: time="2025-05-10T10:44:20.993671766Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 10 10:44:20.996820 containerd[1527]: time="2025-05-10T10:44:20.993682937Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 May 10 10:44:20.996820 containerd[1527]: time="2025-05-10T10:44:20.993719205Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 May 10 10:44:20.996926 containerd[1527]: time="2025-05-10T10:44:20.993738160Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 May 10 10:44:20.997315 containerd[1527]: time="2025-05-10T10:44:20.996933563Z" level=info msg="runtime interface created" May 10 10:44:20.997315 containerd[1527]: time="2025-05-10T10:44:20.997012932Z" level=info msg="created NRI interface" May 10 10:44:20.997315 containerd[1527]: time="2025-05-10T10:44:20.997093663Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 May 10 10:44:20.997315 containerd[1527]: time="2025-05-10T10:44:20.997127998Z" level=info msg="Connect containerd service" May 10 10:44:20.997315 containerd[1527]: time="2025-05-10T10:44:20.997178553Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 10 10:44:20.998640 containerd[1527]: time="2025-05-10T10:44:20.998612302Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 10 10:44:21.256282 sshd_keygen[1514]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 10 10:44:21.290737 containerd[1527]: time="2025-05-10T10:44:21.290298539Z" level=info msg="Start subscribing containerd event" May 10 10:44:21.290737 containerd[1527]: time="2025-05-10T10:44:21.290371696Z" level=info msg="Start recovering state" May 10 10:44:21.290737 containerd[1527]: time="2025-05-10T10:44:21.290538068Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc May 10 10:44:21.291455 containerd[1527]: time="2025-05-10T10:44:21.291437645Z" level=info msg="Start event monitor" May 10 10:44:21.291567 containerd[1527]: time="2025-05-10T10:44:21.291550928Z" level=info msg="Start cni network conf syncer for default" May 10 10:44:21.291657 containerd[1527]: time="2025-05-10T10:44:21.291641698Z" level=info msg="Start streaming server" May 10 10:44:21.293220 containerd[1527]: time="2025-05-10T10:44:21.291774327Z" level=info msg="Registered namespace \"k8s.io\" with NRI" May 10 10:44:21.293220 containerd[1527]: time="2025-05-10T10:44:21.291788243Z" level=info msg="runtime interface starting up..." May 10 10:44:21.293220 containerd[1527]: time="2025-05-10T10:44:21.291795236Z" level=info msg="starting plugins..." May 10 10:44:21.293220 containerd[1527]: time="2025-05-10T10:44:21.291812649Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" May 10 10:44:21.293220 containerd[1527]: time="2025-05-10T10:44:21.291493881Z" level=info msg=serving... address=/run/containerd/containerd.sock May 10 10:44:21.293220 containerd[1527]: time="2025-05-10T10:44:21.291988969Z" level=info msg="containerd successfully booted in 0.334820s" May 10 10:44:21.292127 systemd[1]: Started containerd.service - containerd container runtime. May 10 10:44:21.321085 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 10 10:44:21.327355 systemd[1]: Starting issuegen.service - Generate /run/issue... May 10 10:44:21.333820 systemd[1]: Started sshd@0-172.24.4.188:22-172.24.4.1:58458.service - OpenSSH per-connection server daemon (172.24.4.1:58458). May 10 10:44:21.358829 systemd[1]: issuegen.service: Deactivated successfully. May 10 10:44:21.362993 systemd[1]: Finished issuegen.service - Generate /run/issue. May 10 10:44:21.373133 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 10 10:44:21.400273 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 10 10:44:21.411578 systemd[1]: Started getty@tty1.service - Getty on tty1. May 10 10:44:21.425115 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. May 10 10:44:21.426264 systemd[1]: Reached target getty.target - Login Prompts. May 10 10:44:21.502923 systemd-networkd[1420]: eth0: Gained IPv6LL May 10 10:44:21.504347 systemd-timesyncd[1436]: Network configuration changed, trying to establish connection. May 10 10:44:21.506805 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 10 10:44:21.507549 tar[1518]: linux-amd64/README.md May 10 10:44:21.513440 systemd[1]: Reached target network-online.target - Network is Online. May 10 10:44:21.522893 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 10 10:44:21.528586 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 10 10:44:21.539811 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 10 10:44:21.561015 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 10 10:44:22.437108 sshd[1593]: Accepted publickey for core from 172.24.4.1 port 58458 ssh2: RSA SHA256:KJPMTbzVpA/z4Q0YiJhVuiYABmBXzBBZVku5cZVzxpg May 10 10:44:22.440657 sshd-session[1593]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 10:44:22.460482 systemd[1]: Created slice user-500.slice - User Slice of UID 500. 
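A few entries back the CRI plugin starts with confDir=/etc/cni/net.d and then reports "no network config found in /etc/cni/net.d", which is expected this early: nothing has installed a CNI network config yet. Below is a minimal sketch of that check, assuming the usual CNI convention of *.conf, *.conflist and *.json files in the conf dir; it is an illustration, not containerd's own code.

```python
# Minimal sketch of the condition containerd's CRI plugin is warning about:
# look for CNI network configs under /etc/cni/net.d (path taken from the
# CRI config dumped above). File suffixes follow the common CNI convention.
import glob
import json
import os

CNI_CONF_DIR = "/etc/cni/net.d"

def find_cni_configs(conf_dir=CNI_CONF_DIR):
    patterns = ("*.conf", "*.conflist", "*.json")
    return sorted(f for p in patterns for f in glob.glob(os.path.join(conf_dir, p)))

configs = find_cni_configs()
if not configs:
    print(f"no network config found in {CNI_CONF_DIR}")  # matches the log message
else:
    for path in configs:
        with open(path) as f:
            data = json.load(f)
        print(path, "->", data.get("name"), data.get("type") or "(conflist)")
```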
May 10 10:44:22.472200 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 10 10:44:22.500309 systemd-logind[1498]: New session 1 of user core. May 10 10:44:22.511650 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 10 10:44:22.519289 systemd[1]: Starting user@500.service - User Manager for UID 500... May 10 10:44:22.535520 (systemd)[1620]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 10 10:44:22.543155 systemd-logind[1498]: New session c1 of user core. May 10 10:44:22.711558 systemd[1620]: Queued start job for default target default.target. May 10 10:44:22.721924 systemd[1620]: Created slice app.slice - User Application Slice. May 10 10:44:22.722120 systemd[1620]: Reached target paths.target - Paths. May 10 10:44:22.722342 systemd[1620]: Reached target timers.target - Timers. May 10 10:44:22.724874 systemd[1620]: Starting dbus.socket - D-Bus User Message Bus Socket... May 10 10:44:22.752192 systemd[1620]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 10 10:44:22.752539 systemd[1620]: Reached target sockets.target - Sockets. May 10 10:44:22.752718 systemd[1620]: Reached target basic.target - Basic System. May 10 10:44:22.752793 systemd[1]: Started user@500.service - User Manager for UID 500. May 10 10:44:22.753006 systemd[1620]: Reached target default.target - Main User Target. May 10 10:44:22.753032 systemd[1620]: Startup finished in 200ms. May 10 10:44:22.759935 systemd[1]: Started session-1.scope - Session 1 of User core. May 10 10:44:23.172841 systemd[1]: Started sshd@1-172.24.4.188:22-172.24.4.1:58468.service - OpenSSH per-connection server daemon (172.24.4.1:58468). May 10 10:44:23.465473 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 10 10:44:23.481258 (kubelet)[1637]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 10 10:44:24.656233 kubelet[1637]: E0510 10:44:24.656106 1637 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 10 10:44:24.659886 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 10 10:44:24.660265 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 10 10:44:24.661212 systemd[1]: kubelet.service: Consumed 1.908s CPU time, 254.8M memory peak. May 10 10:44:25.294054 sshd[1631]: Accepted publickey for core from 172.24.4.1 port 58468 ssh2: RSA SHA256:KJPMTbzVpA/z4Q0YiJhVuiYABmBXzBBZVku5cZVzxpg May 10 10:44:25.296806 sshd-session[1631]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 10:44:25.310798 systemd-logind[1498]: New session 2 of user core. May 10 10:44:25.327185 systemd[1]: Started session-2.scope - Session 2 of User core. May 10 10:44:25.937737 sshd[1647]: Connection closed by 172.24.4.1 port 58468 May 10 10:44:25.937911 sshd-session[1631]: pam_unix(sshd:session): session closed for user core May 10 10:44:25.952210 systemd[1]: sshd@1-172.24.4.188:22-172.24.4.1:58468.service: Deactivated successfully. May 10 10:44:25.956405 systemd[1]: session-2.scope: Deactivated successfully. May 10 10:44:25.958601 systemd-logind[1498]: Session 2 logged out. 
Waiting for processes to exit. May 10 10:44:25.963388 systemd[1]: Started sshd@2-172.24.4.188:22-172.24.4.1:57434.service - OpenSSH per-connection server daemon (172.24.4.1:57434). May 10 10:44:25.970575 systemd-logind[1498]: Removed session 2. May 10 10:44:26.492255 login[1599]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) May 10 10:44:26.505189 systemd-logind[1498]: New session 3 of user core. May 10 10:44:26.512139 systemd[1]: Started session-3.scope - Session 3 of User core. May 10 10:44:26.515225 login[1600]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) May 10 10:44:26.527853 systemd-logind[1498]: New session 4 of user core. May 10 10:44:26.533962 systemd[1]: Started session-4.scope - Session 4 of User core. May 10 10:44:27.321898 coreos-metadata[1485]: May 10 10:44:27.321 WARN failed to locate config-drive, using the metadata service API instead May 10 10:44:27.377599 sshd[1652]: Accepted publickey for core from 172.24.4.1 port 57434 ssh2: RSA SHA256:KJPMTbzVpA/z4Q0YiJhVuiYABmBXzBBZVku5cZVzxpg May 10 10:44:27.378484 coreos-metadata[1485]: May 10 10:44:27.378 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1 May 10 10:44:27.381367 sshd-session[1652]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 10:44:27.391936 systemd-logind[1498]: New session 5 of user core. May 10 10:44:27.402096 systemd[1]: Started session-5.scope - Session 5 of User core. May 10 10:44:27.492168 coreos-metadata[1485]: May 10 10:44:27.492 INFO Fetch successful May 10 10:44:27.492168 coreos-metadata[1485]: May 10 10:44:27.492 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 May 10 10:44:27.503197 coreos-metadata[1485]: May 10 10:44:27.503 INFO Fetch successful May 10 10:44:27.503197 coreos-metadata[1485]: May 10 10:44:27.503 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 May 10 10:44:27.518239 coreos-metadata[1485]: May 10 10:44:27.518 INFO Fetch successful May 10 10:44:27.518239 coreos-metadata[1485]: May 10 10:44:27.518 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 May 10 10:44:27.529414 coreos-metadata[1485]: May 10 10:44:27.529 INFO Fetch successful May 10 10:44:27.529414 coreos-metadata[1485]: May 10 10:44:27.529 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 May 10 10:44:27.540258 coreos-metadata[1485]: May 10 10:44:27.540 INFO Fetch successful May 10 10:44:27.540258 coreos-metadata[1485]: May 10 10:44:27.540 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 May 10 10:44:27.554072 coreos-metadata[1485]: May 10 10:44:27.554 INFO Fetch successful May 10 10:44:27.613184 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. May 10 10:44:27.614624 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
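coreos-metadata fails to find a config drive and falls back to the metadata service at 169.254.169.254, then walks the endpoints shown above (hostname, instance-id, instance-type, local-ipv4, public-ipv4). A hedged sketch of the same fetch pattern with the standard library follows; it only works from inside an instance that exposes that link-local endpoint, and the URLs are the ones that appear in the log.

```python
# Sketch of the metadata fetches seen above. The URLs are taken from the log;
# the requests only succeed inside an instance that serves 169.254.169.254.
import urllib.request

BASE = "http://169.254.169.254/latest/meta-data"
PATHS = ["hostname", "instance-id", "instance-type", "local-ipv4", "public-ipv4"]

def fetch(path, timeout=2.0):
    with urllib.request.urlopen(f"{BASE}/{path}", timeout=timeout) as resp:
        return resp.read().decode().strip()

for path in PATHS:
    try:
        print(f"{path}: {fetch(path)}")
    except OSError as exc:
        print(f"{path}: fetch failed ({exc})")
```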
May 10 10:44:27.799123 coreos-metadata[1554]: May 10 10:44:27.799 WARN failed to locate config-drive, using the metadata service API instead May 10 10:44:27.842446 coreos-metadata[1554]: May 10 10:44:27.842 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 May 10 10:44:27.854916 coreos-metadata[1554]: May 10 10:44:27.854 INFO Fetch successful May 10 10:44:27.854916 coreos-metadata[1554]: May 10 10:44:27.854 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 May 10 10:44:27.868294 coreos-metadata[1554]: May 10 10:44:27.868 INFO Fetch successful May 10 10:44:27.877890 unknown[1554]: wrote ssh authorized keys file for user: core May 10 10:44:27.939855 update-ssh-keys[1691]: Updated "/home/core/.ssh/authorized_keys" May 10 10:44:27.941991 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). May 10 10:44:27.946460 systemd[1]: Finished sshkeys.service. May 10 10:44:27.952497 systemd[1]: Reached target multi-user.target - Multi-User System. May 10 10:44:27.953357 systemd[1]: Startup finished in 3.811s (kernel) + 16.863s (initrd) + 11.338s (userspace) = 32.013s. May 10 10:44:27.968839 sshd[1681]: Connection closed by 172.24.4.1 port 57434 May 10 10:44:27.969799 sshd-session[1652]: pam_unix(sshd:session): session closed for user core May 10 10:44:27.975313 systemd[1]: sshd@2-172.24.4.188:22-172.24.4.1:57434.service: Deactivated successfully. May 10 10:44:27.978956 systemd[1]: session-5.scope: Deactivated successfully. May 10 10:44:27.982409 systemd-logind[1498]: Session 5 logged out. Waiting for processes to exit. May 10 10:44:27.985003 systemd-logind[1498]: Removed session 5. May 10 10:44:34.911656 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 10 10:44:34.917181 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 10 10:44:35.319688 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 10 10:44:35.325080 (kubelet)[1704]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 10 10:44:35.707593 kubelet[1704]: E0510 10:44:35.707486 1704 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 10 10:44:35.711478 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 10 10:44:35.711635 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 10 10:44:35.711958 systemd[1]: kubelet.service: Consumed 315ms CPU time, 101.2M memory peak. May 10 10:44:37.990334 systemd[1]: Started sshd@3-172.24.4.188:22-172.24.4.1:57964.service - OpenSSH per-connection server daemon (172.24.4.1:57964). May 10 10:44:39.473094 sshd[1713]: Accepted publickey for core from 172.24.4.1 port 57964 ssh2: RSA SHA256:KJPMTbzVpA/z4Q0YiJhVuiYABmBXzBBZVku5cZVzxpg May 10 10:44:39.474776 sshd-session[1713]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 10:44:39.484333 systemd-logind[1498]: New session 6 of user core. May 10 10:44:39.496021 systemd[1]: Started session-6.scope - Session 6 of User core. 
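kubelet keeps exiting with "failed to load Kubelet config file /var/lib/kubelet/config.yaml … no such file or directory", which is normal on a node that has not yet been bootstrapped; kubeadm (or whatever provisioner this cluster uses, an assumption not stated in the log) is what normally writes that file. A tiny sketch of the precondition it is tripping over, with the path taken from the error message:

```python
# The repeated kubelet failure above boils down to one missing file. The path
# comes from the log; the note about kubeadm is an assumption about how this
# node will eventually be configured, not something the log itself states.
import os

KUBELET_CONFIG = "/var/lib/kubelet/config.yaml"

if os.path.isfile(KUBELET_CONFIG):
    print(f"{KUBELET_CONFIG} exists ({os.path.getsize(KUBELET_CONFIG)} bytes); kubelet can start")
else:
    print(f"{KUBELET_CONFIG} is missing; kubelet will keep exiting until a "
          "provisioner such as kubeadm writes it")
```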
May 10 10:44:40.182285 sshd[1715]: Connection closed by 172.24.4.1 port 57964 May 10 10:44:40.183466 sshd-session[1713]: pam_unix(sshd:session): session closed for user core May 10 10:44:40.205248 systemd[1]: sshd@3-172.24.4.188:22-172.24.4.1:57964.service: Deactivated successfully. May 10 10:44:40.208497 systemd[1]: session-6.scope: Deactivated successfully. May 10 10:44:40.210800 systemd-logind[1498]: Session 6 logged out. Waiting for processes to exit. May 10 10:44:40.215083 systemd[1]: Started sshd@4-172.24.4.188:22-172.24.4.1:57974.service - OpenSSH per-connection server daemon (172.24.4.1:57974). May 10 10:44:40.218006 systemd-logind[1498]: Removed session 6. May 10 10:44:41.431829 sshd[1720]: Accepted publickey for core from 172.24.4.1 port 57974 ssh2: RSA SHA256:KJPMTbzVpA/z4Q0YiJhVuiYABmBXzBBZVku5cZVzxpg May 10 10:44:41.434846 sshd-session[1720]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 10:44:41.446451 systemd-logind[1498]: New session 7 of user core. May 10 10:44:41.458063 systemd[1]: Started session-7.scope - Session 7 of User core. May 10 10:44:42.075759 sshd[1723]: Connection closed by 172.24.4.1 port 57974 May 10 10:44:42.077054 sshd-session[1720]: pam_unix(sshd:session): session closed for user core May 10 10:44:42.097631 systemd[1]: sshd@4-172.24.4.188:22-172.24.4.1:57974.service: Deactivated successfully. May 10 10:44:42.101557 systemd[1]: session-7.scope: Deactivated successfully. May 10 10:44:42.105219 systemd-logind[1498]: Session 7 logged out. Waiting for processes to exit. May 10 10:44:42.108650 systemd[1]: Started sshd@5-172.24.4.188:22-172.24.4.1:57982.service - OpenSSH per-connection server daemon (172.24.4.1:57982). May 10 10:44:42.112394 systemd-logind[1498]: Removed session 7. May 10 10:44:43.375911 sshd[1728]: Accepted publickey for core from 172.24.4.1 port 57982 ssh2: RSA SHA256:KJPMTbzVpA/z4Q0YiJhVuiYABmBXzBBZVku5cZVzxpg May 10 10:44:43.378682 sshd-session[1728]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 10:44:43.391799 systemd-logind[1498]: New session 8 of user core. May 10 10:44:43.399497 systemd[1]: Started session-8.scope - Session 8 of User core. May 10 10:44:43.979237 sshd[1731]: Connection closed by 172.24.4.1 port 57982 May 10 10:44:43.976935 sshd-session[1728]: pam_unix(sshd:session): session closed for user core May 10 10:44:43.999683 systemd[1]: sshd@5-172.24.4.188:22-172.24.4.1:57982.service: Deactivated successfully. May 10 10:44:44.003291 systemd[1]: session-8.scope: Deactivated successfully. May 10 10:44:44.005436 systemd-logind[1498]: Session 8 logged out. Waiting for processes to exit. May 10 10:44:44.010130 systemd[1]: Started sshd@6-172.24.4.188:22-172.24.4.1:52364.service - OpenSSH per-connection server daemon (172.24.4.1:52364). May 10 10:44:44.014161 systemd-logind[1498]: Removed session 8. May 10 10:44:45.234182 sshd[1736]: Accepted publickey for core from 172.24.4.1 port 52364 ssh2: RSA SHA256:KJPMTbzVpA/z4Q0YiJhVuiYABmBXzBBZVku5cZVzxpg May 10 10:44:45.237273 sshd-session[1736]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 10:44:45.250169 systemd-logind[1498]: New session 9 of user core. May 10 10:44:45.263096 systemd[1]: Started session-9.scope - Session 9 of User core. 
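Each accepted login above is tagged with the key's fingerprint, e.g. "RSA SHA256:KJPMTbzVpA/…". OpenSSH derives that string by SHA-256-hashing the raw key blob and base64-encoding the digest with the padding stripped. The sketch below reproduces that for authorized_keys-style input, so a key on disk can be matched against the sshd entries; it assumes the simple "type blob comment" line format without options.

```python
# Recompute OpenSSH-style SHA256 fingerprints from authorized_keys lines.
# The digest is base64-encoded with trailing '=' padding stripped, which is
# the format sshd prints in the "Accepted publickey" messages above.
import base64
import hashlib
import sys

def ssh_fingerprint(authorized_key_line: str) -> str:
    # authorized_keys format assumed here: "<key-type> <base64-blob> [comment]"
    blob_b64 = authorized_key_line.split()[1]
    digest = hashlib.sha256(base64.b64decode(blob_b64)).digest()
    return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")

if __name__ == "__main__":
    # e.g.: python3 fingerprint.py < /home/core/.ssh/authorized_keys
    for line in sys.stdin:
        line = line.strip()
        if line and not line.startswith("#"):
            print(ssh_fingerprint(line), line.split()[0])
```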
May 10 10:44:45.704531 sudo[1740]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 10 10:44:45.705456 sudo[1740]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 10 10:44:45.729660 sudo[1740]: pam_unix(sudo:session): session closed for user root May 10 10:44:45.863328 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 10 10:44:45.867534 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 10 10:44:45.948748 sshd[1739]: Connection closed by 172.24.4.1 port 52364 May 10 10:44:45.948751 sshd-session[1736]: pam_unix(sshd:session): session closed for user core May 10 10:44:45.969087 systemd[1]: sshd@6-172.24.4.188:22-172.24.4.1:52364.service: Deactivated successfully. May 10 10:44:45.972312 systemd[1]: session-9.scope: Deactivated successfully. May 10 10:44:45.976025 systemd-logind[1498]: Session 9 logged out. Waiting for processes to exit. May 10 10:44:45.978554 systemd[1]: Started sshd@7-172.24.4.188:22-172.24.4.1:52372.service - OpenSSH per-connection server daemon (172.24.4.1:52372). May 10 10:44:45.982617 systemd-logind[1498]: Removed session 9. May 10 10:44:46.320111 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 10 10:44:46.333130 (kubelet)[1755]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 10 10:44:46.684900 kubelet[1755]: E0510 10:44:46.684821 1755 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 10 10:44:46.688322 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 10 10:44:46.688611 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 10 10:44:46.689504 systemd[1]: kubelet.service: Consumed 268ms CPU time, 103.7M memory peak. May 10 10:44:47.260908 sshd[1748]: Accepted publickey for core from 172.24.4.1 port 52372 ssh2: RSA SHA256:KJPMTbzVpA/z4Q0YiJhVuiYABmBXzBBZVku5cZVzxpg May 10 10:44:47.263775 sshd-session[1748]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 10:44:47.277824 systemd-logind[1498]: New session 10 of user core. May 10 10:44:47.287166 systemd[1]: Started session-10.scope - Session 10 of User core. May 10 10:44:47.672797 sudo[1764]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 10 10:44:47.673101 sudo[1764]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 10 10:44:47.678183 sudo[1764]: pam_unix(sudo:session): session closed for user root May 10 10:44:47.689834 sudo[1763]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules May 10 10:44:47.690469 sudo[1763]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 10 10:44:47.708923 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 10 10:44:47.788932 augenrules[1786]: No rules May 10 10:44:47.792006 systemd[1]: audit-rules.service: Deactivated successfully. May 10 10:44:47.792567 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
May 10 10:44:47.795236 sudo[1763]: pam_unix(sudo:session): session closed for user root May 10 10:44:47.984637 sshd[1762]: Connection closed by 172.24.4.1 port 52372 May 10 10:44:47.986948 sshd-session[1748]: pam_unix(sshd:session): session closed for user core May 10 10:44:47.996569 systemd[1]: sshd@7-172.24.4.188:22-172.24.4.1:52372.service: Deactivated successfully. May 10 10:44:47.999388 systemd[1]: session-10.scope: Deactivated successfully. May 10 10:44:48.002213 systemd-logind[1498]: Session 10 logged out. Waiting for processes to exit. May 10 10:44:48.005266 systemd[1]: Started sshd@8-172.24.4.188:22-172.24.4.1:52382.service - OpenSSH per-connection server daemon (172.24.4.1:52382). May 10 10:44:48.008526 systemd-logind[1498]: Removed session 10. May 10 10:44:49.319410 sshd[1794]: Accepted publickey for core from 172.24.4.1 port 52382 ssh2: RSA SHA256:KJPMTbzVpA/z4Q0YiJhVuiYABmBXzBBZVku5cZVzxpg May 10 10:44:49.322022 sshd-session[1794]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 10:44:49.333319 systemd-logind[1498]: New session 11 of user core. May 10 10:44:49.345256 systemd[1]: Started session-11.scope - Session 11 of User core. May 10 10:44:49.712891 sudo[1798]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 10 10:44:49.713638 sudo[1798]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 10 10:44:50.890204 systemd[1]: Starting docker.service - Docker Application Container Engine... May 10 10:44:50.908353 (dockerd)[1817]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 10 10:44:51.505857 dockerd[1817]: time="2025-05-10T10:44:51.505308422Z" level=info msg="Starting up" May 10 10:44:51.508974 dockerd[1817]: time="2025-05-10T10:44:51.508844204Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" May 10 10:44:51.602814 dockerd[1817]: time="2025-05-10T10:44:51.602679452Z" level=info msg="Loading containers: start." May 10 10:44:51.630477 kernel: Initializing XFRM netlink socket May 10 10:44:51.709388 systemd-timesyncd[1436]: Contacted time server 50.205.57.38:123 (2.flatcar.pool.ntp.org). May 10 10:44:51.709470 systemd-timesyncd[1436]: Initial clock synchronization to Sat 2025-05-10 10:44:51.797228 UTC. May 10 10:44:52.051152 systemd-networkd[1420]: docker0: Link UP May 10 10:44:52.060033 dockerd[1817]: time="2025-05-10T10:44:52.059946875Z" level=info msg="Loading containers: done." 
May 10 10:44:52.085290 dockerd[1817]: time="2025-05-10T10:44:52.085212655Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 10 10:44:52.085528 dockerd[1817]: time="2025-05-10T10:44:52.085376108Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 May 10 10:44:52.085586 dockerd[1817]: time="2025-05-10T10:44:52.085566100Z" level=info msg="Initializing buildkit" May 10 10:44:52.144605 dockerd[1817]: time="2025-05-10T10:44:52.144455979Z" level=info msg="Completed buildkit initialization" May 10 10:44:52.171148 dockerd[1817]: time="2025-05-10T10:44:52.171016388Z" level=info msg="Daemon has completed initialization" May 10 10:44:52.171848 dockerd[1817]: time="2025-05-10T10:44:52.171396046Z" level=info msg="API listen on /run/docker.sock" May 10 10:44:52.171570 systemd[1]: Started docker.service - Docker Application Container Engine. May 10 10:44:53.847037 containerd[1527]: time="2025-05-10T10:44:53.846855884Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\"" May 10 10:44:54.746463 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1908814394.mount: Deactivated successfully. May 10 10:44:56.569972 containerd[1527]: time="2025-05-10T10:44:56.569921437Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 10:44:56.571675 containerd[1527]: time="2025-05-10T10:44:56.571632688Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.4: active requests=0, bytes read=28682887" May 10 10:44:56.576075 containerd[1527]: time="2025-05-10T10:44:56.576043771Z" level=info msg="ImageCreate event name:\"sha256:1c20c8797e48698afa3380793df2f1fb260e3209df72d8e864e1bc73af8336e5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 10:44:56.578681 containerd[1527]: time="2025-05-10T10:44:56.578657592Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 10:44:56.579625 containerd[1527]: time="2025-05-10T10:44:56.579584552Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.4\" with image id \"sha256:1c20c8797e48698afa3380793df2f1fb260e3209df72d8e864e1bc73af8336e5\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f\", size \"28679679\" in 2.732659546s" May 10 10:44:56.579672 containerd[1527]: time="2025-05-10T10:44:56.579628496Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\" returns image reference \"sha256:1c20c8797e48698afa3380793df2f1fb260e3209df72d8e864e1bc73af8336e5\"" May 10 10:44:56.580439 containerd[1527]: time="2025-05-10T10:44:56.580420913Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\"" May 10 10:44:56.863324 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. May 10 10:44:56.868079 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 10 10:44:57.030391 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
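Once dockerd logs "API listen on /run/docker.sock", the Engine API is reachable over that Unix socket even though no TCP listener exists. As a hedged illustration (querying the standard /version endpoint, which is not something this boot flow itself does), here is a stdlib-only request over the socket:

```python
# Query the Docker Engine API over the Unix socket the daemon says it is
# listening on ("API listen on /run/docker.sock"). /version is a standard
# Engine API endpoint; this is an illustration, not part of the boot sequence.
import http.client
import json
import socket

class UnixHTTPConnection(http.client.HTTPConnection):
    def __init__(self, socket_path: str):
        super().__init__("localhost")  # host header is unused for AF_UNIX
        self.socket_path = socket_path

    def connect(self):
        sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        sock.connect(self.socket_path)
        self.sock = sock

conn = UnixHTTPConnection("/run/docker.sock")
conn.request("GET", "/version")
body = json.loads(conn.getresponse().read())
print(body.get("Version"), body.get("ApiVersion"))
```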
May 10 10:44:57.040073 (kubelet)[2080]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 10 10:44:57.087857 kubelet[2080]: E0510 10:44:57.087717 2080 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 10 10:44:57.092027 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 10 10:44:57.092346 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 10 10:44:57.093352 systemd[1]: kubelet.service: Consumed 190ms CPU time, 101.9M memory peak. May 10 10:44:59.431585 containerd[1527]: time="2025-05-10T10:44:59.431524823Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 10:44:59.433156 containerd[1527]: time="2025-05-10T10:44:59.432938820Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.4: active requests=0, bytes read=24779597" May 10 10:44:59.434589 containerd[1527]: time="2025-05-10T10:44:59.434531309Z" level=info msg="ImageCreate event name:\"sha256:4db5364cd5509e0fc8e9f821fbc4b31ed79d4c9ae21809d22030ad67d530a61a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 10:44:59.437507 containerd[1527]: time="2025-05-10T10:44:59.437464311Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:25e29187ea66f0ff9b9a00114849c3a30b649005c900a8b2a69e3f3fa56448fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 10:44:59.439408 containerd[1527]: time="2025-05-10T10:44:59.438801746Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.4\" with image id \"sha256:4db5364cd5509e0fc8e9f821fbc4b31ed79d4c9ae21809d22030ad67d530a61a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:25e29187ea66f0ff9b9a00114849c3a30b649005c900a8b2a69e3f3fa56448fb\", size \"26267962\" in 2.85823427s" May 10 10:44:59.439408 containerd[1527]: time="2025-05-10T10:44:59.438847196Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\" returns image reference \"sha256:4db5364cd5509e0fc8e9f821fbc4b31ed79d4c9ae21809d22030ad67d530a61a\"" May 10 10:44:59.439482 containerd[1527]: time="2025-05-10T10:44:59.439423332Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\"" May 10 10:45:01.250873 containerd[1527]: time="2025-05-10T10:45:01.250824679Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 10:45:01.252786 containerd[1527]: time="2025-05-10T10:45:01.252335939Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.4: active requests=0, bytes read=19169946" May 10 10:45:01.257644 containerd[1527]: time="2025-05-10T10:45:01.257615617Z" level=info msg="ImageCreate event name:\"sha256:70a252485ed1f2e8332b6f0a5f8f57443bfbc3c480228f8dcd82ad5ab5cc4000\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 10:45:01.260610 containerd[1527]: time="2025-05-10T10:45:01.260568347Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/kube-scheduler@sha256:09c55f8dac59a4b8e5e354140f5a4bdd6fa9bd95c42d6bcba6782ed37c31b5a2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 10:45:01.261751 containerd[1527]: time="2025-05-10T10:45:01.261629522Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.4\" with image id \"sha256:70a252485ed1f2e8332b6f0a5f8f57443bfbc3c480228f8dcd82ad5ab5cc4000\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:09c55f8dac59a4b8e5e354140f5a4bdd6fa9bd95c42d6bcba6782ed37c31b5a2\", size \"20658329\" in 1.822181586s" May 10 10:45:01.261751 containerd[1527]: time="2025-05-10T10:45:01.261659647Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\" returns image reference \"sha256:70a252485ed1f2e8332b6f0a5f8f57443bfbc3c480228f8dcd82ad5ab5cc4000\"" May 10 10:45:01.262453 containerd[1527]: time="2025-05-10T10:45:01.262413453Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\"" May 10 10:45:02.781661 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4019813175.mount: Deactivated successfully. May 10 10:45:03.443447 containerd[1527]: time="2025-05-10T10:45:03.443388339Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 10:45:03.444938 containerd[1527]: time="2025-05-10T10:45:03.444908647Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.4: active requests=0, bytes read=30917864" May 10 10:45:03.446730 containerd[1527]: time="2025-05-10T10:45:03.446638566Z" level=info msg="ImageCreate event name:\"sha256:608f0c8bf7f9651ca79f170235ea5eefb978a0c1da132e7477a88ad37d171ad3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 10:45:03.448814 containerd[1527]: time="2025-05-10T10:45:03.448758059Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 10:45:03.449438 containerd[1527]: time="2025-05-10T10:45:03.449388201Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.4\" with image id \"sha256:608f0c8bf7f9651ca79f170235ea5eefb978a0c1da132e7477a88ad37d171ad3\", repo tag \"registry.k8s.io/kube-proxy:v1.32.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\", size \"30916875\" in 2.186935009s" May 10 10:45:03.449490 containerd[1527]: time="2025-05-10T10:45:03.449439275Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\" returns image reference \"sha256:608f0c8bf7f9651ca79f170235ea5eefb978a0c1da132e7477a88ad37d171ad3\"" May 10 10:45:03.450463 containerd[1527]: time="2025-05-10T10:45:03.450214494Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" May 10 10:45:04.117167 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2397556913.mount: Deactivated successfully. May 10 10:45:05.953868 update_engine[1504]: I20250510 10:45:05.953797 1504 update_attempter.cc:509] Updating boot flags... 
May 10 10:45:06.006749 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2162) May 10 10:45:06.084725 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2165) May 10 10:45:06.159661 containerd[1527]: time="2025-05-10T10:45:06.159614584Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 10:45:06.161490 containerd[1527]: time="2025-05-10T10:45:06.161458541Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565249" May 10 10:45:06.166731 containerd[1527]: time="2025-05-10T10:45:06.166336141Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 10:45:06.178099 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2165) May 10 10:45:06.178435 containerd[1527]: time="2025-05-10T10:45:06.178402804Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 10:45:06.180072 containerd[1527]: time="2025-05-10T10:45:06.180028151Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 2.729776435s" May 10 10:45:06.180129 containerd[1527]: time="2025-05-10T10:45:06.180073061Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" May 10 10:45:06.182092 containerd[1527]: time="2025-05-10T10:45:06.182063984Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 10 10:45:06.798218 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1038274506.mount: Deactivated successfully. 
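Unit names like var-lib-containerd-tmpmounts-containerd\x2dmount1038274506.mount are systemd's path escaping at work: "/" becomes "-", and characters that would be ambiguous (including a literal "-") are hex-escaped as \xNN. The sketch below is a rough approximation of that mapping, enough to explain the names above; `systemd-escape --path --suffix=mount` is the authoritative tool and may differ in edge cases.

```python
# Rough approximation of systemd's path-to-unit-name escaping, enough to explain
# names like "var-lib-containerd-tmpmounts-containerd\x2dmount1038274506.mount".
# `systemd-escape --path --suffix=mount <path>` is the real tool; edge cases differ.
def systemd_escape_path(path: str, suffix: str = "mount") -> str:
    trimmed = path.strip("/")
    out = []
    for i, ch in enumerate(trimmed):
        if ch == "/":
            out.append("-")
        elif ch.isalnum() or ch == "_" or (ch == "." and i != 0):
            out.append(ch)
        else:
            out.append("".join(f"\\x{b:02x}" for b in ch.encode()))
    return "".join(out) + "." + suffix

print(systemd_escape_path("/var/lib/containerd/tmpmounts/containerd-mount1038274506"))
# -> var-lib-containerd-tmpmounts-containerd\x2dmount1038274506.mount
```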
May 10 10:45:06.810972 containerd[1527]: time="2025-05-10T10:45:06.810756241Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 10 10:45:06.812444 containerd[1527]: time="2025-05-10T10:45:06.812345811Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146" May 10 10:45:06.814236 containerd[1527]: time="2025-05-10T10:45:06.814112744Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 10 10:45:06.819247 containerd[1527]: time="2025-05-10T10:45:06.819113202Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 10 10:45:06.821765 containerd[1527]: time="2025-05-10T10:45:06.821419318Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 639.294849ms" May 10 10:45:06.821765 containerd[1527]: time="2025-05-10T10:45:06.821517839Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" May 10 10:45:06.823494 containerd[1527]: time="2025-05-10T10:45:06.823253421Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" May 10 10:45:07.113808 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. May 10 10:45:07.117605 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 10 10:45:07.304245 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 10 10:45:07.312938 (kubelet)[2182]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 10 10:45:07.543438 kubelet[2182]: E0510 10:45:07.374839 2182 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 10 10:45:07.378592 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 10 10:45:07.378934 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 10 10:45:07.379811 systemd[1]: kubelet.service: Consumed 215ms CPU time, 104M memory peak. May 10 10:45:08.397468 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount702136127.mount: Deactivated successfully. 
May 10 10:45:12.379863 containerd[1527]: time="2025-05-10T10:45:12.379712465Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 10:45:12.381582 containerd[1527]: time="2025-05-10T10:45:12.381304020Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551368" May 10 10:45:12.382787 containerd[1527]: time="2025-05-10T10:45:12.382756788Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 10:45:12.386533 containerd[1527]: time="2025-05-10T10:45:12.386480030Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 10:45:12.389351 containerd[1527]: time="2025-05-10T10:45:12.387675230Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 5.564309808s" May 10 10:45:12.389351 containerd[1527]: time="2025-05-10T10:45:12.387728551Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" May 10 10:45:15.887007 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 10 10:45:15.887178 systemd[1]: kubelet.service: Consumed 215ms CPU time, 104M memory peak. May 10 10:45:15.889184 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 10 10:45:15.940828 systemd[1]: Reload requested from client PID 2271 ('systemctl') (unit session-11.scope)... May 10 10:45:15.940861 systemd[1]: Reloading... May 10 10:45:16.054436 zram_generator::config[2316]: No configuration found. May 10 10:45:16.187535 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 10 10:45:16.336468 systemd[1]: Reloading finished in 394 ms. May 10 10:45:16.386367 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM May 10 10:45:16.386443 systemd[1]: kubelet.service: Failed with result 'signal'. May 10 10:45:16.386841 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 10 10:45:16.386879 systemd[1]: kubelet.service: Consumed 128ms CPU time, 91.8M memory peak. May 10 10:45:16.389090 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 10 10:45:16.521471 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 10 10:45:16.532109 (kubelet)[2382]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 10 10:45:16.615485 kubelet[2382]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 10 10:45:16.615485 kubelet[2382]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. 
Image garbage collector will get sandbox image information from CRI. May 10 10:45:16.615485 kubelet[2382]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 10 10:45:16.616228 kubelet[2382]: I0510 10:45:16.615566 2382 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 10 10:45:17.046230 kubelet[2382]: I0510 10:45:17.045652 2382 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" May 10 10:45:17.046230 kubelet[2382]: I0510 10:45:17.045711 2382 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 10 10:45:17.046230 kubelet[2382]: I0510 10:45:17.046118 2382 server.go:954] "Client rotation is on, will bootstrap in background" May 10 10:45:17.082960 kubelet[2382]: E0510 10:45:17.082891 2382 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.24.4.188:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.24.4.188:6443: connect: connection refused" logger="UnhandledError" May 10 10:45:17.084106 kubelet[2382]: I0510 10:45:17.083921 2382 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 10 10:45:17.096595 kubelet[2382]: I0510 10:45:17.096476 2382 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 10 10:45:17.100208 kubelet[2382]: I0510 10:45:17.099537 2382 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 10 10:45:17.100208 kubelet[2382]: I0510 10:45:17.099737 2382 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 10 10:45:17.100208 kubelet[2382]: I0510 10:45:17.099763 2382 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4330-0-0-n-4c6f505fd4.novalocal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 10 10:45:17.100208 kubelet[2382]: I0510 10:45:17.099936 2382 topology_manager.go:138] "Creating topology manager with none policy" May 10 10:45:17.100457 kubelet[2382]: I0510 10:45:17.099946 2382 container_manager_linux.go:304] "Creating device plugin manager" May 10 10:45:17.100457 kubelet[2382]: I0510 10:45:17.100050 2382 state_mem.go:36] "Initialized new in-memory state store" May 10 10:45:17.105845 kubelet[2382]: I0510 10:45:17.105734 2382 kubelet.go:446] "Attempting to sync node with API server" May 10 10:45:17.105845 kubelet[2382]: I0510 10:45:17.105757 2382 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 10 10:45:17.105845 kubelet[2382]: I0510 10:45:17.105786 2382 kubelet.go:352] "Adding apiserver pod source" May 10 10:45:17.108223 kubelet[2382]: I0510 10:45:17.108028 2382 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 10 10:45:17.128836 kubelet[2382]: I0510 10:45:17.128789 2382 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" May 10 10:45:17.130299 kubelet[2382]: I0510 10:45:17.130024 2382 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 10 10:45:17.132140 kubelet[2382]: W0510 10:45:17.131740 2382 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
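The nodeConfig blob printed by container_manager_linux.go above is plain JSON, and its HardEvictionThresholds entries spell out the hard-eviction defaults this kubelet runs with. A small sketch that re-parses just that fragment, copied from the log, to show what the thresholds amount to:

```python
import json

# Fragment copied from the container_manager_linux.go:273 line above
# (only the HardEvictionThresholds key is reproduced here).
NODE_CONFIG_FRAGMENT = """
{"HardEvictionThresholds":[
 {"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},
 {"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},
 {"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},
 {"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},
 {"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}]}
"""

for t in json.loads(NODE_CONFIG_FRAGMENT)["HardEvictionThresholds"]:
    value = t["Value"]
    # Each threshold is either an absolute quantity (e.g. 100Mi) or a percentage.
    human = value["Quantity"] if value["Quantity"] else f"{value['Percentage'] * 100:g}%"
    print(f"evict when {t['Signal']} {t['Operator']} {human}")
```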
May 10 10:45:17.136967 kubelet[2382]: I0510 10:45:17.136936 2382 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 10 10:45:17.137173 kubelet[2382]: I0510 10:45:17.137150 2382 server.go:1287] "Started kubelet" May 10 10:45:17.137688 kubelet[2382]: W0510 10:45:17.127113 2382 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.24.4.188:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4330-0-0-n-4c6f505fd4.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.188:6443: connect: connection refused May 10 10:45:17.137688 kubelet[2382]: E0510 10:45:17.137709 2382 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.24.4.188:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4330-0-0-n-4c6f505fd4.novalocal&limit=500&resourceVersion=0\": dial tcp 172.24.4.188:6443: connect: connection refused" logger="UnhandledError" May 10 10:45:17.138370 kubelet[2382]: W0510 10:45:17.138261 2382 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.24.4.188:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.24.4.188:6443: connect: connection refused May 10 10:45:17.138370 kubelet[2382]: E0510 10:45:17.138321 2382 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.24.4.188:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.24.4.188:6443: connect: connection refused" logger="UnhandledError" May 10 10:45:17.140186 kubelet[2382]: I0510 10:45:17.140020 2382 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 10 10:45:17.143190 kubelet[2382]: I0510 10:45:17.142205 2382 server.go:490] "Adding debug handlers to kubelet server" May 10 10:45:17.144290 kubelet[2382]: I0510 10:45:17.144206 2382 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 10 10:45:17.144840 kubelet[2382]: I0510 10:45:17.144823 2382 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 10 10:45:17.144931 kubelet[2382]: I0510 10:45:17.144895 2382 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 10 10:45:17.145670 kubelet[2382]: I0510 10:45:17.145628 2382 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 10 10:45:17.150547 kubelet[2382]: E0510 10:45:17.148784 2382 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.24.4.188:6443/api/v1/namespaces/default/events\": dial tcp 172.24.4.188:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4330-0-0-n-4c6f505fd4.novalocal.183e2492825a31f0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4330-0-0-n-4c6f505fd4.novalocal,UID:ci-4330-0-0-n-4c6f505fd4.novalocal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4330-0-0-n-4c6f505fd4.novalocal,},FirstTimestamp:2025-05-10 10:45:17.137105392 +0000 UTC m=+0.601217212,LastTimestamp:2025-05-10 10:45:17.137105392 +0000 UTC m=+0.601217212,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4330-0-0-n-4c6f505fd4.novalocal,}" May 10 10:45:17.150972 kubelet[2382]: E0510 10:45:17.150934 2382 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4330-0-0-n-4c6f505fd4.novalocal\" not found" May 10 10:45:17.151431 kubelet[2382]: I0510 10:45:17.151417 2382 volume_manager.go:297] "Starting Kubelet Volume Manager" May 10 10:45:17.151684 kubelet[2382]: I0510 10:45:17.151670 2382 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 10 10:45:17.151820 kubelet[2382]: I0510 10:45:17.151808 2382 reconciler.go:26] "Reconciler: start to sync state" May 10 10:45:17.152248 kubelet[2382]: W0510 10:45:17.152218 2382 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.24.4.188:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.188:6443: connect: connection refused May 10 10:45:17.152340 kubelet[2382]: E0510 10:45:17.152321 2382 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.24.4.188:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.24.4.188:6443: connect: connection refused" logger="UnhandledError" May 10 10:45:17.152819 kubelet[2382]: E0510 10:45:17.152796 2382 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.188:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4330-0-0-n-4c6f505fd4.novalocal?timeout=10s\": dial tcp 172.24.4.188:6443: connect: connection refused" interval="200ms" May 10 10:45:17.153877 kubelet[2382]: I0510 10:45:17.153859 2382 factory.go:221] Registration of the systemd container factory successfully May 10 10:45:17.154046 kubelet[2382]: I0510 10:45:17.154027 2382 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 10 10:45:17.155042 kubelet[2382]: E0510 10:45:17.155026 2382 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 10 10:45:17.155380 kubelet[2382]: I0510 10:45:17.155366 2382 factory.go:221] Registration of the containerd container factory successfully May 10 10:45:17.181614 kubelet[2382]: I0510 10:45:17.181566 2382 cpu_manager.go:221] "Starting CPU manager" policy="none" May 10 10:45:17.181781 kubelet[2382]: I0510 10:45:17.181733 2382 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 10 10:45:17.181951 kubelet[2382]: I0510 10:45:17.181819 2382 state_mem.go:36] "Initialized new in-memory state store" May 10 10:45:17.183961 kubelet[2382]: I0510 10:45:17.183908 2382 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 10 10:45:17.186205 kubelet[2382]: I0510 10:45:17.186171 2382 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 10 10:45:17.186261 kubelet[2382]: I0510 10:45:17.186209 2382 status_manager.go:227] "Starting to sync pod status with apiserver" May 10 10:45:17.186261 kubelet[2382]: I0510 10:45:17.186244 2382 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
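Every reflector and the lease controller above fail the same way, "dial tcp 172.24.4.188:6443: connect: connection refused", with the retry interval doubling (200ms here, 400ms and 800ms further down) while the static kube-apiserver pod is still being created. A rough stand-alone probe with the same doubling backoff, sketched under the assumption that /healthz on the logged address is the endpoint to watch:

```python
"""Probe the API server the way the lease controller above behaves: retry on
"connection refused", doubling the wait between attempts. The /healthz URL
is an assumption built from the advertise address seen in the log."""
import ssl
import time
import urllib.error
import urllib.request

API_HEALTHZ = "https://172.24.4.188:6443/healthz"  # assumed endpoint on the logged address

def wait_for_apiserver(url: str, first_interval: float = 0.2, cap: float = 10.0) -> None:
    interval = first_interval
    while True:
        try:
            urllib.request.urlopen(url, timeout=2)
            print("API server answered")
            return
        except urllib.error.HTTPError:
            # Any HTTP status at all means the server is up and serving requests.
            print("API server is up (returned an HTTP status)")
            return
        except urllib.error.URLError as err:
            if isinstance(err.reason, ssl.SSLError):
                # The TLS handshake got far enough to fail on the certificate,
                # so something is listening on the port.
                print("API server port is open (TLS reached)")
                return
            print(f"not ready ({err.reason}); retrying in {interval:.1f}s")
            time.sleep(interval)
            interval = min(interval * 2, cap)

if __name__ == "__main__":
    wait_for_apiserver(API_HEALTHZ)
```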
May 10 10:45:17.186261 kubelet[2382]: I0510 10:45:17.186256 2382 kubelet.go:2388] "Starting kubelet main sync loop" May 10 10:45:17.186368 kubelet[2382]: E0510 10:45:17.186331 2382 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 10 10:45:17.188516 kubelet[2382]: I0510 10:45:17.188401 2382 policy_none.go:49] "None policy: Start" May 10 10:45:17.188516 kubelet[2382]: I0510 10:45:17.188440 2382 memory_manager.go:186] "Starting memorymanager" policy="None" May 10 10:45:17.188516 kubelet[2382]: I0510 10:45:17.188457 2382 state_mem.go:35] "Initializing new in-memory state store" May 10 10:45:17.189131 kubelet[2382]: W0510 10:45:17.189090 2382 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.24.4.188:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.188:6443: connect: connection refused May 10 10:45:17.189602 kubelet[2382]: E0510 10:45:17.189257 2382 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.24.4.188:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.24.4.188:6443: connect: connection refused" logger="UnhandledError" May 10 10:45:17.198898 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 10 10:45:17.206203 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. May 10 10:45:17.218206 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 10 10:45:17.220446 kubelet[2382]: I0510 10:45:17.220196 2382 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 10 10:45:17.220446 kubelet[2382]: I0510 10:45:17.220377 2382 eviction_manager.go:189] "Eviction manager: starting control loop" May 10 10:45:17.220446 kubelet[2382]: I0510 10:45:17.220389 2382 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 10 10:45:17.220906 kubelet[2382]: I0510 10:45:17.220861 2382 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 10 10:45:17.222563 kubelet[2382]: E0510 10:45:17.222538 2382 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" May 10 10:45:17.222628 kubelet[2382]: E0510 10:45:17.222580 2382 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4330-0-0-n-4c6f505fd4.novalocal\" not found" May 10 10:45:17.309324 systemd[1]: Created slice kubepods-burstable-pode9c4d39ffcac7412af67474b699f78e7.slice - libcontainer container kubepods-burstable-pode9c4d39ffcac7412af67474b699f78e7.slice. 
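The kubepods-burstable-pod&lt;uid&gt;.slice unit created above follows the systemd cgroup driver's naming scheme (the nodeConfig earlier reports CgroupDriver "systemd"); dashes in a pod UID are replaced with underscores, as the cilium pod slice near the end of this log shows. A sketch of that mapping, as an illustration of the convention rather than the kubelet's actual code:

```python
def pod_slice_name(pod_uid: str, qos_class: str = "burstable") -> str:
    """Reproduce the slice names seen in this log, e.g.
    kubepods-burstable-pode9c4d39ffcac7412af67474b699f78e7.slice and
    kubepods-burstable-pod58b6bae0_f13b_41ea_adc6_b527c074bb1a.slice.
    This only mirrors the naming convention, not the kubelet's implementation."""
    # '-' is the hierarchy separator in systemd slice names, so UID dashes
    # are escaped to '_' before being embedded in the unit name.
    escaped_uid = pod_uid.replace("-", "_")
    if qos_class == "guaranteed":
        # Guaranteed pods sit directly under kubepods.slice.
        return f"kubepods-pod{escaped_uid}.slice"
    return f"kubepods-{qos_class}-pod{escaped_uid}.slice"

# The two pod UIDs taken from this log:
print(pod_slice_name("e9c4d39ffcac7412af67474b699f78e7"))      # static kube-apiserver pod
print(pod_slice_name("58b6bae0-f13b-41ea-adc6-b527c074bb1a"))  # cilium-xmhml pod
```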
May 10 10:45:17.328768 kubelet[2382]: E0510 10:45:17.328103 2382 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4330-0-0-n-4c6f505fd4.novalocal\" not found" node="ci-4330-0-0-n-4c6f505fd4.novalocal" May 10 10:45:17.328924 kubelet[2382]: I0510 10:45:17.328783 2382 kubelet_node_status.go:76] "Attempting to register node" node="ci-4330-0-0-n-4c6f505fd4.novalocal" May 10 10:45:17.330778 kubelet[2382]: E0510 10:45:17.330385 2382 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://172.24.4.188:6443/api/v1/nodes\": dial tcp 172.24.4.188:6443: connect: connection refused" node="ci-4330-0-0-n-4c6f505fd4.novalocal" May 10 10:45:17.332551 systemd[1]: Created slice kubepods-burstable-pod2de883a43c5509e43655a0908e464dd1.slice - libcontainer container kubepods-burstable-pod2de883a43c5509e43655a0908e464dd1.slice. May 10 10:45:17.349824 kubelet[2382]: E0510 10:45:17.349736 2382 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4330-0-0-n-4c6f505fd4.novalocal\" not found" node="ci-4330-0-0-n-4c6f505fd4.novalocal" May 10 10:45:17.351747 systemd[1]: Created slice kubepods-burstable-podf4c4cd00f6dc44ad79352356595646c9.slice - libcontainer container kubepods-burstable-podf4c4cd00f6dc44ad79352356595646c9.slice. May 10 10:45:17.352833 kubelet[2382]: I0510 10:45:17.352398 2382 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2de883a43c5509e43655a0908e464dd1-ca-certs\") pod \"kube-controller-manager-ci-4330-0-0-n-4c6f505fd4.novalocal\" (UID: \"2de883a43c5509e43655a0908e464dd1\") " pod="kube-system/kube-controller-manager-ci-4330-0-0-n-4c6f505fd4.novalocal" May 10 10:45:17.352833 kubelet[2382]: I0510 10:45:17.352481 2382 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/2de883a43c5509e43655a0908e464dd1-flexvolume-dir\") pod \"kube-controller-manager-ci-4330-0-0-n-4c6f505fd4.novalocal\" (UID: \"2de883a43c5509e43655a0908e464dd1\") " pod="kube-system/kube-controller-manager-ci-4330-0-0-n-4c6f505fd4.novalocal" May 10 10:45:17.352833 kubelet[2382]: I0510 10:45:17.352753 2382 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2de883a43c5509e43655a0908e464dd1-k8s-certs\") pod \"kube-controller-manager-ci-4330-0-0-n-4c6f505fd4.novalocal\" (UID: \"2de883a43c5509e43655a0908e464dd1\") " pod="kube-system/kube-controller-manager-ci-4330-0-0-n-4c6f505fd4.novalocal" May 10 10:45:17.352833 kubelet[2382]: I0510 10:45:17.352821 2382 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f4c4cd00f6dc44ad79352356595646c9-kubeconfig\") pod \"kube-scheduler-ci-4330-0-0-n-4c6f505fd4.novalocal\" (UID: \"f4c4cd00f6dc44ad79352356595646c9\") " pod="kube-system/kube-scheduler-ci-4330-0-0-n-4c6f505fd4.novalocal" May 10 10:45:17.353134 kubelet[2382]: I0510 10:45:17.352869 2382 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e9c4d39ffcac7412af67474b699f78e7-ca-certs\") pod \"kube-apiserver-ci-4330-0-0-n-4c6f505fd4.novalocal\" (UID: \"e9c4d39ffcac7412af67474b699f78e7\") " 
pod="kube-system/kube-apiserver-ci-4330-0-0-n-4c6f505fd4.novalocal" May 10 10:45:17.353134 kubelet[2382]: I0510 10:45:17.352913 2382 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e9c4d39ffcac7412af67474b699f78e7-k8s-certs\") pod \"kube-apiserver-ci-4330-0-0-n-4c6f505fd4.novalocal\" (UID: \"e9c4d39ffcac7412af67474b699f78e7\") " pod="kube-system/kube-apiserver-ci-4330-0-0-n-4c6f505fd4.novalocal" May 10 10:45:17.353134 kubelet[2382]: I0510 10:45:17.352960 2382 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e9c4d39ffcac7412af67474b699f78e7-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4330-0-0-n-4c6f505fd4.novalocal\" (UID: \"e9c4d39ffcac7412af67474b699f78e7\") " pod="kube-system/kube-apiserver-ci-4330-0-0-n-4c6f505fd4.novalocal" May 10 10:45:17.353134 kubelet[2382]: I0510 10:45:17.353006 2382 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2de883a43c5509e43655a0908e464dd1-kubeconfig\") pod \"kube-controller-manager-ci-4330-0-0-n-4c6f505fd4.novalocal\" (UID: \"2de883a43c5509e43655a0908e464dd1\") " pod="kube-system/kube-controller-manager-ci-4330-0-0-n-4c6f505fd4.novalocal" May 10 10:45:17.353366 kubelet[2382]: I0510 10:45:17.353051 2382 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2de883a43c5509e43655a0908e464dd1-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4330-0-0-n-4c6f505fd4.novalocal\" (UID: \"2de883a43c5509e43655a0908e464dd1\") " pod="kube-system/kube-controller-manager-ci-4330-0-0-n-4c6f505fd4.novalocal" May 10 10:45:17.354414 kubelet[2382]: E0510 10:45:17.353959 2382 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.188:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4330-0-0-n-4c6f505fd4.novalocal?timeout=10s\": dial tcp 172.24.4.188:6443: connect: connection refused" interval="400ms" May 10 10:45:17.356779 kubelet[2382]: E0510 10:45:17.356739 2382 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4330-0-0-n-4c6f505fd4.novalocal\" not found" node="ci-4330-0-0-n-4c6f505fd4.novalocal" May 10 10:45:17.534203 kubelet[2382]: I0510 10:45:17.534086 2382 kubelet_node_status.go:76] "Attempting to register node" node="ci-4330-0-0-n-4c6f505fd4.novalocal" May 10 10:45:17.534985 kubelet[2382]: E0510 10:45:17.534855 2382 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://172.24.4.188:6443/api/v1/nodes\": dial tcp 172.24.4.188:6443: connect: connection refused" node="ci-4330-0-0-n-4c6f505fd4.novalocal" May 10 10:45:17.631648 containerd[1527]: time="2025-05-10T10:45:17.631053306Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4330-0-0-n-4c6f505fd4.novalocal,Uid:e9c4d39ffcac7412af67474b699f78e7,Namespace:kube-system,Attempt:0,}" May 10 10:45:17.652340 containerd[1527]: time="2025-05-10T10:45:17.652264076Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4330-0-0-n-4c6f505fd4.novalocal,Uid:2de883a43c5509e43655a0908e464dd1,Namespace:kube-system,Attempt:0,}" May 10 10:45:17.660480 containerd[1527]: 
time="2025-05-10T10:45:17.660197458Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4330-0-0-n-4c6f505fd4.novalocal,Uid:f4c4cd00f6dc44ad79352356595646c9,Namespace:kube-system,Attempt:0,}" May 10 10:45:17.726574 containerd[1527]: time="2025-05-10T10:45:17.726440937Z" level=info msg="connecting to shim ccad955ec49a4a7b38fad6edba168a37046277b6e139a2c0366e3a66bd16f9d7" address="unix:///run/containerd/s/129d4bc649c54bae696f1735d6bfb3a988dd0a3a82138ef71e46afa3cfdf0280" namespace=k8s.io protocol=ttrpc version=3 May 10 10:45:17.756602 kubelet[2382]: E0510 10:45:17.755625 2382 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.188:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4330-0-0-n-4c6f505fd4.novalocal?timeout=10s\": dial tcp 172.24.4.188:6443: connect: connection refused" interval="800ms" May 10 10:45:17.758470 containerd[1527]: time="2025-05-10T10:45:17.758373036Z" level=info msg="connecting to shim dfb747c1887be96a7e97e2792fbc4b62c001618e480a83b99ab7e95bbae214d5" address="unix:///run/containerd/s/fc26d26ab7a25d034be6261052904d2d60df67e8b964f78e69481c4e8fc13a85" namespace=k8s.io protocol=ttrpc version=3 May 10 10:45:17.767094 containerd[1527]: time="2025-05-10T10:45:17.767051567Z" level=info msg="connecting to shim 39532e1131cda585cfc5625570a81a5ebd0c88ab1de0bead866e8002d79aeddd" address="unix:///run/containerd/s/af5a674089c48436e346acf0a7dc948a53773181c73a98328eaa110cafa40d31" namespace=k8s.io protocol=ttrpc version=3 May 10 10:45:17.788570 systemd[1]: Started cri-containerd-ccad955ec49a4a7b38fad6edba168a37046277b6e139a2c0366e3a66bd16f9d7.scope - libcontainer container ccad955ec49a4a7b38fad6edba168a37046277b6e139a2c0366e3a66bd16f9d7. May 10 10:45:17.810974 systemd[1]: Started cri-containerd-dfb747c1887be96a7e97e2792fbc4b62c001618e480a83b99ab7e95bbae214d5.scope - libcontainer container dfb747c1887be96a7e97e2792fbc4b62c001618e480a83b99ab7e95bbae214d5. May 10 10:45:17.822916 systemd[1]: Started cri-containerd-39532e1131cda585cfc5625570a81a5ebd0c88ab1de0bead866e8002d79aeddd.scope - libcontainer container 39532e1131cda585cfc5625570a81a5ebd0c88ab1de0bead866e8002d79aeddd. 
May 10 10:45:17.890637 containerd[1527]: time="2025-05-10T10:45:17.890531459Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4330-0-0-n-4c6f505fd4.novalocal,Uid:e9c4d39ffcac7412af67474b699f78e7,Namespace:kube-system,Attempt:0,} returns sandbox id \"ccad955ec49a4a7b38fad6edba168a37046277b6e139a2c0366e3a66bd16f9d7\"" May 10 10:45:17.897366 containerd[1527]: time="2025-05-10T10:45:17.897328647Z" level=info msg="CreateContainer within sandbox \"ccad955ec49a4a7b38fad6edba168a37046277b6e139a2c0366e3a66bd16f9d7\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 10 10:45:17.900541 containerd[1527]: time="2025-05-10T10:45:17.900500392Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4330-0-0-n-4c6f505fd4.novalocal,Uid:2de883a43c5509e43655a0908e464dd1,Namespace:kube-system,Attempt:0,} returns sandbox id \"dfb747c1887be96a7e97e2792fbc4b62c001618e480a83b99ab7e95bbae214d5\"" May 10 10:45:17.915138 containerd[1527]: time="2025-05-10T10:45:17.914948443Z" level=info msg="Container 394e55c993cafa7829647ebe63ff019946bc9d99b7974b2a488a1202d88854e4: CDI devices from CRI Config.CDIDevices: []" May 10 10:45:17.918744 containerd[1527]: time="2025-05-10T10:45:17.918154195Z" level=info msg="CreateContainer within sandbox \"dfb747c1887be96a7e97e2792fbc4b62c001618e480a83b99ab7e95bbae214d5\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 10 10:45:17.930306 containerd[1527]: time="2025-05-10T10:45:17.930270715Z" level=info msg="CreateContainer within sandbox \"ccad955ec49a4a7b38fad6edba168a37046277b6e139a2c0366e3a66bd16f9d7\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"394e55c993cafa7829647ebe63ff019946bc9d99b7974b2a488a1202d88854e4\"" May 10 10:45:17.931285 containerd[1527]: time="2025-05-10T10:45:17.931238256Z" level=info msg="StartContainer for \"394e55c993cafa7829647ebe63ff019946bc9d99b7974b2a488a1202d88854e4\"" May 10 10:45:17.933396 containerd[1527]: time="2025-05-10T10:45:17.933369434Z" level=info msg="connecting to shim 394e55c993cafa7829647ebe63ff019946bc9d99b7974b2a488a1202d88854e4" address="unix:///run/containerd/s/129d4bc649c54bae696f1735d6bfb3a988dd0a3a82138ef71e46afa3cfdf0280" protocol=ttrpc version=3 May 10 10:45:17.938576 containerd[1527]: time="2025-05-10T10:45:17.938345647Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4330-0-0-n-4c6f505fd4.novalocal,Uid:f4c4cd00f6dc44ad79352356595646c9,Namespace:kube-system,Attempt:0,} returns sandbox id \"39532e1131cda585cfc5625570a81a5ebd0c88ab1de0bead866e8002d79aeddd\"" May 10 10:45:17.939322 kubelet[2382]: I0510 10:45:17.939116 2382 kubelet_node_status.go:76] "Attempting to register node" node="ci-4330-0-0-n-4c6f505fd4.novalocal" May 10 10:45:17.939829 kubelet[2382]: E0510 10:45:17.939781 2382 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://172.24.4.188:6443/api/v1/nodes\": dial tcp 172.24.4.188:6443: connect: connection refused" node="ci-4330-0-0-n-4c6f505fd4.novalocal" May 10 10:45:17.944048 containerd[1527]: time="2025-05-10T10:45:17.944006491Z" level=info msg="CreateContainer within sandbox \"39532e1131cda585cfc5625570a81a5ebd0c88ab1de0bead866e8002d79aeddd\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 10 10:45:17.946262 containerd[1527]: time="2025-05-10T10:45:17.946183002Z" level=info msg="Container d2307fbcf3de9bcfaab6ad32f98221c097d38cfff457fd5436484604e8e76490: CDI devices from CRI Config.CDIDevices: 
[]" May 10 10:45:17.960866 systemd[1]: Started cri-containerd-394e55c993cafa7829647ebe63ff019946bc9d99b7974b2a488a1202d88854e4.scope - libcontainer container 394e55c993cafa7829647ebe63ff019946bc9d99b7974b2a488a1202d88854e4. May 10 10:45:17.963275 containerd[1527]: time="2025-05-10T10:45:17.963164351Z" level=info msg="CreateContainer within sandbox \"dfb747c1887be96a7e97e2792fbc4b62c001618e480a83b99ab7e95bbae214d5\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"d2307fbcf3de9bcfaab6ad32f98221c097d38cfff457fd5436484604e8e76490\"" May 10 10:45:17.964154 containerd[1527]: time="2025-05-10T10:45:17.964040276Z" level=info msg="StartContainer for \"d2307fbcf3de9bcfaab6ad32f98221c097d38cfff457fd5436484604e8e76490\"" May 10 10:45:17.965002 containerd[1527]: time="2025-05-10T10:45:17.964078152Z" level=info msg="Container bc937f218614adf9f25523a51318b76c2ce955ab070ac25e5d4b54f4adcf6bd4: CDI devices from CRI Config.CDIDevices: []" May 10 10:45:17.967813 containerd[1527]: time="2025-05-10T10:45:17.967143195Z" level=info msg="connecting to shim d2307fbcf3de9bcfaab6ad32f98221c097d38cfff457fd5436484604e8e76490" address="unix:///run/containerd/s/fc26d26ab7a25d034be6261052904d2d60df67e8b964f78e69481c4e8fc13a85" protocol=ttrpc version=3 May 10 10:45:17.979259 containerd[1527]: time="2025-05-10T10:45:17.979220836Z" level=info msg="CreateContainer within sandbox \"39532e1131cda585cfc5625570a81a5ebd0c88ab1de0bead866e8002d79aeddd\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"bc937f218614adf9f25523a51318b76c2ce955ab070ac25e5d4b54f4adcf6bd4\"" May 10 10:45:17.981004 containerd[1527]: time="2025-05-10T10:45:17.979922857Z" level=info msg="StartContainer for \"bc937f218614adf9f25523a51318b76c2ce955ab070ac25e5d4b54f4adcf6bd4\"" May 10 10:45:17.981004 containerd[1527]: time="2025-05-10T10:45:17.980923945Z" level=info msg="connecting to shim bc937f218614adf9f25523a51318b76c2ce955ab070ac25e5d4b54f4adcf6bd4" address="unix:///run/containerd/s/af5a674089c48436e346acf0a7dc948a53773181c73a98328eaa110cafa40d31" protocol=ttrpc version=3 May 10 10:45:17.996535 systemd[1]: Started cri-containerd-d2307fbcf3de9bcfaab6ad32f98221c097d38cfff457fd5436484604e8e76490.scope - libcontainer container d2307fbcf3de9bcfaab6ad32f98221c097d38cfff457fd5436484604e8e76490. May 10 10:45:18.005855 systemd[1]: Started cri-containerd-bc937f218614adf9f25523a51318b76c2ce955ab070ac25e5d4b54f4adcf6bd4.scope - libcontainer container bc937f218614adf9f25523a51318b76c2ce955ab070ac25e5d4b54f4adcf6bd4. 
May 10 10:45:18.057440 containerd[1527]: time="2025-05-10T10:45:18.057399599Z" level=info msg="StartContainer for \"394e55c993cafa7829647ebe63ff019946bc9d99b7974b2a488a1202d88854e4\" returns successfully" May 10 10:45:18.080933 containerd[1527]: time="2025-05-10T10:45:18.080903349Z" level=info msg="StartContainer for \"d2307fbcf3de9bcfaab6ad32f98221c097d38cfff457fd5436484604e8e76490\" returns successfully" May 10 10:45:18.138399 containerd[1527]: time="2025-05-10T10:45:18.138119950Z" level=info msg="StartContainer for \"bc937f218614adf9f25523a51318b76c2ce955ab070ac25e5d4b54f4adcf6bd4\" returns successfully" May 10 10:45:18.199858 kubelet[2382]: E0510 10:45:18.199763 2382 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4330-0-0-n-4c6f505fd4.novalocal\" not found" node="ci-4330-0-0-n-4c6f505fd4.novalocal" May 10 10:45:18.201843 kubelet[2382]: E0510 10:45:18.201403 2382 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4330-0-0-n-4c6f505fd4.novalocal\" not found" node="ci-4330-0-0-n-4c6f505fd4.novalocal" May 10 10:45:18.205918 kubelet[2382]: E0510 10:45:18.205901 2382 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4330-0-0-n-4c6f505fd4.novalocal\" not found" node="ci-4330-0-0-n-4c6f505fd4.novalocal" May 10 10:45:18.746073 kubelet[2382]: I0510 10:45:18.744279 2382 kubelet_node_status.go:76] "Attempting to register node" node="ci-4330-0-0-n-4c6f505fd4.novalocal" May 10 10:45:19.208715 kubelet[2382]: E0510 10:45:19.206918 2382 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4330-0-0-n-4c6f505fd4.novalocal\" not found" node="ci-4330-0-0-n-4c6f505fd4.novalocal" May 10 10:45:19.209610 kubelet[2382]: E0510 10:45:19.209336 2382 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4330-0-0-n-4c6f505fd4.novalocal\" not found" node="ci-4330-0-0-n-4c6f505fd4.novalocal" May 10 10:45:19.991658 kubelet[2382]: E0510 10:45:19.991593 2382 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4330-0-0-n-4c6f505fd4.novalocal\" not found" node="ci-4330-0-0-n-4c6f505fd4.novalocal" May 10 10:45:20.110224 kubelet[2382]: E0510 10:45:20.110029 2382 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4330-0-0-n-4c6f505fd4.novalocal.183e2492825a31f0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4330-0-0-n-4c6f505fd4.novalocal,UID:ci-4330-0-0-n-4c6f505fd4.novalocal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4330-0-0-n-4c6f505fd4.novalocal,},FirstTimestamp:2025-05-10 10:45:17.137105392 +0000 UTC m=+0.601217212,LastTimestamp:2025-05-10 10:45:17.137105392 +0000 UTC m=+0.601217212,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4330-0-0-n-4c6f505fd4.novalocal,}" May 10 10:45:20.134874 kubelet[2382]: I0510 10:45:20.134620 2382 apiserver.go:52] "Watching apiserver" May 10 10:45:20.152096 kubelet[2382]: I0510 10:45:20.152071 2382 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 10 10:45:20.255738 
kubelet[2382]: I0510 10:45:20.255032 2382 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4330-0-0-n-4c6f505fd4.novalocal" May 10 10:45:20.257445 kubelet[2382]: I0510 10:45:20.256227 2382 kubelet_node_status.go:79] "Successfully registered node" node="ci-4330-0-0-n-4c6f505fd4.novalocal" May 10 10:45:20.480184 kubelet[2382]: E0510 10:45:20.480067 2382 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4330-0-0-n-4c6f505fd4.novalocal\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4330-0-0-n-4c6f505fd4.novalocal" May 10 10:45:20.480184 kubelet[2382]: I0510 10:45:20.480105 2382 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4330-0-0-n-4c6f505fd4.novalocal" May 10 10:45:20.489828 kubelet[2382]: E0510 10:45:20.489769 2382 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4330-0-0-n-4c6f505fd4.novalocal\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4330-0-0-n-4c6f505fd4.novalocal" May 10 10:45:20.489828 kubelet[2382]: I0510 10:45:20.489801 2382 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4330-0-0-n-4c6f505fd4.novalocal" May 10 10:45:20.494510 kubelet[2382]: E0510 10:45:20.494424 2382 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4330-0-0-n-4c6f505fd4.novalocal\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4330-0-0-n-4c6f505fd4.novalocal" May 10 10:45:20.854010 kubelet[2382]: I0510 10:45:20.853956 2382 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4330-0-0-n-4c6f505fd4.novalocal" May 10 10:45:20.859274 kubelet[2382]: E0510 10:45:20.858825 2382 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4330-0-0-n-4c6f505fd4.novalocal\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4330-0-0-n-4c6f505fd4.novalocal" May 10 10:45:21.803045 kubelet[2382]: I0510 10:45:21.802865 2382 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4330-0-0-n-4c6f505fd4.novalocal" May 10 10:45:21.851419 kubelet[2382]: W0510 10:45:21.850878 2382 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 10 10:45:23.359802 systemd[1]: Reload requested from client PID 2652 ('systemctl') (unit session-11.scope)... May 10 10:45:23.359820 systemd[1]: Reloading... May 10 10:45:23.462753 zram_generator::config[2693]: No configuration found. May 10 10:45:23.609834 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 10 10:45:23.771192 systemd[1]: Reloading finished in 411 ms. May 10 10:45:23.797224 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 10 10:45:23.808542 systemd[1]: kubelet.service: Deactivated successfully. May 10 10:45:23.808770 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 10 10:45:23.808819 systemd[1]: kubelet.service: Consumed 1.199s CPU time, 125M memory peak. 
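The warnings.go:70 messages ("metadata.name ... must not contain dots") appear because the node name ci-4330-0-0-n-4c6f505fd4.novalocal, which the static pod names inherit, is not a valid RFC 1123 DNS label. A minimal check of that rule, assuming the standard DNS-label grammar is what the warning refers to:

```python
import re

# RFC 1123 DNS label: lowercase alphanumerics and '-', 1-63 characters,
# starting and ending with an alphanumeric - and in particular no dots.
DNS_LABEL = re.compile(r"[a-z0-9]([-a-z0-9]{0,61}[a-z0-9])?")

def check(name: str) -> None:
    ok = DNS_LABEL.fullmatch(name) is not None
    verdict = "ok" if ok else "not a DNS label (triggers the warning above)"
    print(f"{name}: {verdict}")

check("ci-4330-0-0-n-4c6f505fd4.novalocal")  # node name from the log: contains dots
check("ci-4330-0-0-n-4c6f505fd4")            # the same name without the domain suffix
```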
May 10 10:45:23.810512 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 10 10:45:23.928410 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 10 10:45:23.942947 (kubelet)[2760]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 10 10:45:24.005721 kubelet[2760]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 10 10:45:24.005721 kubelet[2760]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 10 10:45:24.005721 kubelet[2760]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 10 10:45:24.005721 kubelet[2760]: I0510 10:45:24.005652 2760 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 10 10:45:24.025413 kubelet[2760]: I0510 10:45:24.022368 2760 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" May 10 10:45:24.025413 kubelet[2760]: I0510 10:45:24.023770 2760 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 10 10:45:24.025413 kubelet[2760]: I0510 10:45:24.024353 2760 server.go:954] "Client rotation is on, will bootstrap in background" May 10 10:45:24.029447 kubelet[2760]: I0510 10:45:24.029423 2760 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 10 10:45:24.034159 kubelet[2760]: I0510 10:45:24.034112 2760 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 10 10:45:24.038980 kubelet[2760]: I0510 10:45:24.038950 2760 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 10 10:45:24.044734 kubelet[2760]: I0510 10:45:24.043400 2760 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 10 10:45:24.045132 kubelet[2760]: I0510 10:45:24.045092 2760 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 10 10:45:24.045321 kubelet[2760]: I0510 10:45:24.045130 2760 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4330-0-0-n-4c6f505fd4.novalocal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 10 10:45:24.045427 kubelet[2760]: I0510 10:45:24.045326 2760 topology_manager.go:138] "Creating topology manager with none policy" May 10 10:45:24.045427 kubelet[2760]: I0510 10:45:24.045338 2760 container_manager_linux.go:304] "Creating device plugin manager" May 10 10:45:24.045427 kubelet[2760]: I0510 10:45:24.045371 2760 state_mem.go:36] "Initialized new in-memory state store" May 10 10:45:24.045537 kubelet[2760]: I0510 10:45:24.045495 2760 kubelet.go:446] "Attempting to sync node with API server" May 10 10:45:24.045537 kubelet[2760]: I0510 10:45:24.045509 2760 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 10 10:45:24.045537 kubelet[2760]: I0510 10:45:24.045533 2760 kubelet.go:352] "Adding apiserver pod source" May 10 10:45:24.045796 kubelet[2760]: I0510 10:45:24.045545 2760 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 10 10:45:24.046835 kubelet[2760]: I0510 10:45:24.046640 2760 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" May 10 10:45:24.048802 kubelet[2760]: I0510 10:45:24.048168 2760 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 10 10:45:24.048802 kubelet[2760]: I0510 10:45:24.048776 2760 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 10 10:45:24.048882 kubelet[2760]: I0510 10:45:24.048831 2760 server.go:1287] "Started kubelet" May 10 10:45:24.053719 kubelet[2760]: I0510 10:45:24.051857 2760 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 10 10:45:24.055779 kubelet[2760]: I0510 10:45:24.055750 2760 
server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 10 10:45:24.058282 kubelet[2760]: I0510 10:45:24.058224 2760 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 10 10:45:24.058587 kubelet[2760]: I0510 10:45:24.058567 2760 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 10 10:45:24.059238 kubelet[2760]: I0510 10:45:24.059219 2760 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 10 10:45:24.061897 kubelet[2760]: I0510 10:45:24.061862 2760 volume_manager.go:297] "Starting Kubelet Volume Manager" May 10 10:45:24.062113 kubelet[2760]: E0510 10:45:24.062089 2760 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4330-0-0-n-4c6f505fd4.novalocal\" not found" May 10 10:45:24.067852 kubelet[2760]: I0510 10:45:24.067817 2760 server.go:490] "Adding debug handlers to kubelet server" May 10 10:45:24.073303 kubelet[2760]: I0510 10:45:24.071362 2760 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 10 10:45:24.073303 kubelet[2760]: I0510 10:45:24.071479 2760 reconciler.go:26] "Reconciler: start to sync state" May 10 10:45:24.074066 kubelet[2760]: I0510 10:45:24.074038 2760 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 10 10:45:24.078616 kubelet[2760]: I0510 10:45:24.078225 2760 factory.go:221] Registration of the containerd container factory successfully May 10 10:45:24.078616 kubelet[2760]: I0510 10:45:24.078245 2760 factory.go:221] Registration of the systemd container factory successfully May 10 10:45:24.091158 kubelet[2760]: I0510 10:45:24.090939 2760 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 10 10:45:24.093118 kubelet[2760]: I0510 10:45:24.093097 2760 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 10 10:45:24.093298 kubelet[2760]: I0510 10:45:24.093286 2760 status_manager.go:227] "Starting to sync pod status with apiserver" May 10 10:45:24.093379 kubelet[2760]: I0510 10:45:24.093368 2760 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
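The restarted kubelet (PID 2760) listens on 0.0.0.0:10250 and serves the podresources API on a unix socket, while the crio cadvisor factory fails only because /var/run/crio/crio.sock is absent. A small reachability sketch for those endpoints; the containerd socket path is an assumption (the stock default), the other paths and the port are taken from the log:

```python
import os
import socket

KUBELET_PORT = 10250                                               # from "Starting to listen" above
PODRESOURCES_SOCK = "/var/lib/kubelet/pod-resources/kubelet.sock"  # from the podresources line
CRIO_SOCK = "/var/run/crio/crio.sock"                              # missing on this node, per the log
CONTAINERD_SOCK = "/run/containerd/containerd.sock"                # assumption: stock containerd default

def tcp_open(port: int) -> bool:
    """True if something accepts TCP connections on localhost:port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(1)
        return s.connect_ex(("127.0.0.1", port)) == 0

print(f"kubelet :{KUBELET_PORT} listening: {tcp_open(KUBELET_PORT)}")
for path in (PODRESOURCES_SOCK, CRIO_SOCK, CONTAINERD_SOCK):
    print(f"{path}: {'present' if os.path.exists(path) else 'absent'}")
```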
May 10 10:45:24.093438 kubelet[2760]: I0510 10:45:24.093430 2760 kubelet.go:2388] "Starting kubelet main sync loop" May 10 10:45:24.093752 kubelet[2760]: E0510 10:45:24.093527 2760 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 10 10:45:24.154719 kubelet[2760]: I0510 10:45:24.154667 2760 cpu_manager.go:221] "Starting CPU manager" policy="none" May 10 10:45:24.155044 kubelet[2760]: I0510 10:45:24.154686 2760 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 10 10:45:24.155044 kubelet[2760]: I0510 10:45:24.154881 2760 state_mem.go:36] "Initialized new in-memory state store" May 10 10:45:24.155473 kubelet[2760]: I0510 10:45:24.155456 2760 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 10 10:45:24.155574 kubelet[2760]: I0510 10:45:24.155537 2760 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 10 10:45:24.156170 kubelet[2760]: I0510 10:45:24.155763 2760 policy_none.go:49] "None policy: Start" May 10 10:45:24.156170 kubelet[2760]: I0510 10:45:24.155780 2760 memory_manager.go:186] "Starting memorymanager" policy="None" May 10 10:45:24.156170 kubelet[2760]: I0510 10:45:24.155792 2760 state_mem.go:35] "Initializing new in-memory state store" May 10 10:45:24.156170 kubelet[2760]: I0510 10:45:24.155965 2760 state_mem.go:75] "Updated machine memory state" May 10 10:45:24.164728 kubelet[2760]: I0510 10:45:24.164651 2760 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 10 10:45:24.165565 kubelet[2760]: I0510 10:45:24.164847 2760 eviction_manager.go:189] "Eviction manager: starting control loop" May 10 10:45:24.165565 kubelet[2760]: I0510 10:45:24.164865 2760 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 10 10:45:24.165565 kubelet[2760]: I0510 10:45:24.165137 2760 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 10 10:45:24.167794 kubelet[2760]: E0510 10:45:24.167430 2760 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" May 10 10:45:24.194363 kubelet[2760]: I0510 10:45:24.194272 2760 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4330-0-0-n-4c6f505fd4.novalocal" May 10 10:45:24.194492 kubelet[2760]: I0510 10:45:24.194476 2760 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4330-0-0-n-4c6f505fd4.novalocal" May 10 10:45:24.196953 kubelet[2760]: I0510 10:45:24.194992 2760 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4330-0-0-n-4c6f505fd4.novalocal" May 10 10:45:24.272952 kubelet[2760]: I0510 10:45:24.272918 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e9c4d39ffcac7412af67474b699f78e7-ca-certs\") pod \"kube-apiserver-ci-4330-0-0-n-4c6f505fd4.novalocal\" (UID: \"e9c4d39ffcac7412af67474b699f78e7\") " pod="kube-system/kube-apiserver-ci-4330-0-0-n-4c6f505fd4.novalocal" May 10 10:45:24.273684 kubelet[2760]: I0510 10:45:24.273292 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e9c4d39ffcac7412af67474b699f78e7-k8s-certs\") pod \"kube-apiserver-ci-4330-0-0-n-4c6f505fd4.novalocal\" (UID: \"e9c4d39ffcac7412af67474b699f78e7\") " pod="kube-system/kube-apiserver-ci-4330-0-0-n-4c6f505fd4.novalocal" May 10 10:45:24.274387 kubelet[2760]: I0510 10:45:24.273970 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e9c4d39ffcac7412af67474b699f78e7-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4330-0-0-n-4c6f505fd4.novalocal\" (UID: \"e9c4d39ffcac7412af67474b699f78e7\") " pod="kube-system/kube-apiserver-ci-4330-0-0-n-4c6f505fd4.novalocal" May 10 10:45:24.274387 kubelet[2760]: I0510 10:45:24.274109 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2de883a43c5509e43655a0908e464dd1-ca-certs\") pod \"kube-controller-manager-ci-4330-0-0-n-4c6f505fd4.novalocal\" (UID: \"2de883a43c5509e43655a0908e464dd1\") " pod="kube-system/kube-controller-manager-ci-4330-0-0-n-4c6f505fd4.novalocal" May 10 10:45:24.274387 kubelet[2760]: I0510 10:45:24.274146 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2de883a43c5509e43655a0908e464dd1-k8s-certs\") pod \"kube-controller-manager-ci-4330-0-0-n-4c6f505fd4.novalocal\" (UID: \"2de883a43c5509e43655a0908e464dd1\") " pod="kube-system/kube-controller-manager-ci-4330-0-0-n-4c6f505fd4.novalocal" May 10 10:45:24.274387 kubelet[2760]: I0510 10:45:24.274170 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2de883a43c5509e43655a0908e464dd1-kubeconfig\") pod \"kube-controller-manager-ci-4330-0-0-n-4c6f505fd4.novalocal\" (UID: \"2de883a43c5509e43655a0908e464dd1\") " pod="kube-system/kube-controller-manager-ci-4330-0-0-n-4c6f505fd4.novalocal" May 10 10:45:24.274525 kubelet[2760]: I0510 10:45:24.274202 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/2de883a43c5509e43655a0908e464dd1-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4330-0-0-n-4c6f505fd4.novalocal\" (UID: \"2de883a43c5509e43655a0908e464dd1\") " pod="kube-system/kube-controller-manager-ci-4330-0-0-n-4c6f505fd4.novalocal" May 10 10:45:24.274525 kubelet[2760]: I0510 10:45:24.274227 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f4c4cd00f6dc44ad79352356595646c9-kubeconfig\") pod \"kube-scheduler-ci-4330-0-0-n-4c6f505fd4.novalocal\" (UID: \"f4c4cd00f6dc44ad79352356595646c9\") " pod="kube-system/kube-scheduler-ci-4330-0-0-n-4c6f505fd4.novalocal" May 10 10:45:24.274525 kubelet[2760]: I0510 10:45:24.274248 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/2de883a43c5509e43655a0908e464dd1-flexvolume-dir\") pod \"kube-controller-manager-ci-4330-0-0-n-4c6f505fd4.novalocal\" (UID: \"2de883a43c5509e43655a0908e464dd1\") " pod="kube-system/kube-controller-manager-ci-4330-0-0-n-4c6f505fd4.novalocal" May 10 10:45:24.277724 kubelet[2760]: I0510 10:45:24.275684 2760 kubelet_node_status.go:76] "Attempting to register node" node="ci-4330-0-0-n-4c6f505fd4.novalocal" May 10 10:45:24.277724 kubelet[2760]: W0510 10:45:24.276116 2760 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 10 10:45:24.278108 kubelet[2760]: W0510 10:45:24.278094 2760 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 10 10:45:24.284590 kubelet[2760]: W0510 10:45:24.284565 2760 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 10 10:45:24.284867 kubelet[2760]: E0510 10:45:24.284783 2760 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4330-0-0-n-4c6f505fd4.novalocal\" already exists" pod="kube-system/kube-controller-manager-ci-4330-0-0-n-4c6f505fd4.novalocal" May 10 10:45:24.301264 kubelet[2760]: I0510 10:45:24.299259 2760 kubelet_node_status.go:125] "Node was previously registered" node="ci-4330-0-0-n-4c6f505fd4.novalocal" May 10 10:45:24.301264 kubelet[2760]: I0510 10:45:24.299328 2760 kubelet_node_status.go:79] "Successfully registered node" node="ci-4330-0-0-n-4c6f505fd4.novalocal" May 10 10:45:24.369595 sudo[2797]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin May 10 10:45:24.369963 sudo[2797]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) May 10 10:45:25.046537 kubelet[2760]: I0510 10:45:25.046507 2760 apiserver.go:52] "Watching apiserver" May 10 10:45:25.125814 kubelet[2760]: I0510 10:45:25.124819 2760 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 10 10:45:25.132981 kubelet[2760]: I0510 10:45:25.132172 2760 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4330-0-0-n-4c6f505fd4.novalocal" May 10 10:45:25.142257 kubelet[2760]: W0510 10:45:25.140896 2760 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 10 
10:45:25.142257 kubelet[2760]: E0510 10:45:25.140988 2760 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4330-0-0-n-4c6f505fd4.novalocal\" already exists" pod="kube-system/kube-scheduler-ci-4330-0-0-n-4c6f505fd4.novalocal" May 10 10:45:25.164333 sudo[2797]: pam_unix(sudo:session): session closed for user root May 10 10:45:25.191744 kubelet[2760]: I0510 10:45:25.191675 2760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4330-0-0-n-4c6f505fd4.novalocal" podStartSLOduration=4.191549146 podStartE2EDuration="4.191549146s" podCreationTimestamp="2025-05-10 10:45:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-10 10:45:25.177184102 +0000 UTC m=+1.227857347" watchObservedRunningTime="2025-05-10 10:45:25.191549146 +0000 UTC m=+1.242222401" May 10 10:45:25.210072 kubelet[2760]: I0510 10:45:25.209920 2760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4330-0-0-n-4c6f505fd4.novalocal" podStartSLOduration=1.209861575 podStartE2EDuration="1.209861575s" podCreationTimestamp="2025-05-10 10:45:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-10 10:45:25.192620922 +0000 UTC m=+1.243294167" watchObservedRunningTime="2025-05-10 10:45:25.209861575 +0000 UTC m=+1.260534820" May 10 10:45:25.227890 kubelet[2760]: I0510 10:45:25.227663 2760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4330-0-0-n-4c6f505fd4.novalocal" podStartSLOduration=1.227645902 podStartE2EDuration="1.227645902s" podCreationTimestamp="2025-05-10 10:45:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-10 10:45:25.210037792 +0000 UTC m=+1.260711037" watchObservedRunningTime="2025-05-10 10:45:25.227645902 +0000 UTC m=+1.278319147" May 10 10:45:28.018816 kubelet[2760]: I0510 10:45:28.018744 2760 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 10 10:45:28.019771 containerd[1527]: time="2025-05-10T10:45:28.019500742Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 10 10:45:28.020203 kubelet[2760]: I0510 10:45:28.019735 2760 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 10 10:45:28.238372 sudo[1798]: pam_unix(sudo:session): session closed for user root May 10 10:45:28.503104 sshd[1797]: Connection closed by 172.24.4.1 port 52382 May 10 10:45:28.504170 sshd-session[1794]: pam_unix(sshd:session): session closed for user core May 10 10:45:28.511687 systemd[1]: sshd@8-172.24.4.188:22-172.24.4.1:52382.service: Deactivated successfully. May 10 10:45:28.516199 systemd[1]: session-11.scope: Deactivated successfully. May 10 10:45:28.516671 systemd[1]: session-11.scope: Consumed 7.374s CPU time, 270.9M memory peak. May 10 10:45:28.523168 systemd-logind[1498]: Session 11 logged out. Waiting for processes to exit. May 10 10:45:28.529034 systemd-logind[1498]: Removed session 11. 
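The podStartSLOduration values reported above are consistent with watchObservedRunningTime minus podCreationTimestamp. Re-deriving them from the timestamps quoted in the log (truncated to microseconds, since strptime does not accept the log's nanosecond precision):

```python
"""Re-derive the podStartSLOduration figures from the timestamps quoted in
the pod_startup_latency_tracker lines above."""
from datetime import datetime, timezone

def slo_seconds(created: str, observed: str) -> float:
    c = datetime.strptime(created, "%Y-%m-%d %H:%M:%S").replace(tzinfo=timezone.utc)
    o = datetime.strptime(observed, "%Y-%m-%d %H:%M:%S.%f").replace(tzinfo=timezone.utc)
    return (o - c).total_seconds()

# (podCreationTimestamp, watchObservedRunningTime) pairs from the log.
pods = {
    "kube-controller-manager": ("2025-05-10 10:45:21", "2025-05-10 10:45:25.191549"),
    "kube-scheduler":          ("2025-05-10 10:45:24", "2025-05-10 10:45:25.209861"),
    "kube-apiserver":          ("2025-05-10 10:45:24", "2025-05-10 10:45:25.227645"),
}
for name, (created, observed) in pods.items():
    # Prints ~4.191549s, ~1.209861s, ~1.227645s, matching the log.
    print(f"{name}: {slo_seconds(created, observed):.6f}s")
```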
May 10 10:45:28.813966 kubelet[2760]: W0510 10:45:28.811814 2760 reflector.go:569] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ci-4330-0-0-n-4c6f505fd4.novalocal" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4330-0-0-n-4c6f505fd4.novalocal' and this object May 10 10:45:28.813966 kubelet[2760]: E0510 10:45:28.811860 2760 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"hubble-server-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"hubble-server-certs\" is forbidden: User \"system:node:ci-4330-0-0-n-4c6f505fd4.novalocal\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4330-0-0-n-4c6f505fd4.novalocal' and this object" logger="UnhandledError" May 10 10:45:28.813966 kubelet[2760]: W0510 10:45:28.811998 2760 reflector.go:569] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ci-4330-0-0-n-4c6f505fd4.novalocal" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4330-0-0-n-4c6f505fd4.novalocal' and this object May 10 10:45:28.813966 kubelet[2760]: E0510 10:45:28.812016 2760 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"cilium-clustermesh\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-clustermesh\" is forbidden: User \"system:node:ci-4330-0-0-n-4c6f505fd4.novalocal\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4330-0-0-n-4c6f505fd4.novalocal' and this object" logger="UnhandledError" May 10 10:45:28.814144 kubelet[2760]: I0510 10:45:28.812066 2760 status_manager.go:890] "Failed to get status for pod" podUID="58b6bae0-f13b-41ea-adc6-b527c074bb1a" pod="kube-system/cilium-xmhml" err="pods \"cilium-xmhml\" is forbidden: User \"system:node:ci-4330-0-0-n-4c6f505fd4.novalocal\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4330-0-0-n-4c6f505fd4.novalocal' and this object" May 10 10:45:28.814144 kubelet[2760]: W0510 10:45:28.812220 2760 reflector.go:569] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ci-4330-0-0-n-4c6f505fd4.novalocal" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4330-0-0-n-4c6f505fd4.novalocal' and this object May 10 10:45:28.814144 kubelet[2760]: E0510 10:45:28.812240 2760 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"cilium-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"cilium-config\" is forbidden: User \"system:node:ci-4330-0-0-n-4c6f505fd4.novalocal\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4330-0-0-n-4c6f505fd4.novalocal' and this object" logger="UnhandledError" May 10 10:45:28.814144 kubelet[2760]: W0510 10:45:28.812277 2760 reflector.go:569] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4330-0-0-n-4c6f505fd4.novalocal" cannot list resource "configmaps" in API group "" in the 
namespace "kube-system": no relationship found between node 'ci-4330-0-0-n-4c6f505fd4.novalocal' and this object May 10 10:45:28.814144 kubelet[2760]: W0510 10:45:28.812290 2760 reflector.go:569] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ci-4330-0-0-n-4c6f505fd4.novalocal" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4330-0-0-n-4c6f505fd4.novalocal' and this object May 10 10:45:28.814280 kubelet[2760]: E0510 10:45:28.812289 2760 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:ci-4330-0-0-n-4c6f505fd4.novalocal\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4330-0-0-n-4c6f505fd4.novalocal' and this object" logger="UnhandledError" May 10 10:45:28.814280 kubelet[2760]: E0510 10:45:28.812323 2760 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:ci-4330-0-0-n-4c6f505fd4.novalocal\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4330-0-0-n-4c6f505fd4.novalocal' and this object" logger="UnhandledError" May 10 10:45:28.819603 systemd[1]: Created slice kubepods-burstable-pod58b6bae0_f13b_41ea_adc6_b527c074bb1a.slice - libcontainer container kubepods-burstable-pod58b6bae0_f13b_41ea_adc6_b527c074bb1a.slice. May 10 10:45:28.828900 systemd[1]: Created slice kubepods-besteffort-podf8d2abf6_317c_443a_bdb1_406138d7e855.slice - libcontainer container kubepods-besteffort-podf8d2abf6_317c_443a_bdb1_406138d7e855.slice. 
May 10 10:45:28.966009 kubelet[2760]: I0510 10:45:28.964518 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xdszw\" (UniqueName: \"kubernetes.io/projected/f8d2abf6-317c-443a-bdb1-406138d7e855-kube-api-access-xdszw\") pod \"kube-proxy-pdnqb\" (UID: \"f8d2abf6-317c-443a-bdb1-406138d7e855\") " pod="kube-system/kube-proxy-pdnqb" May 10 10:45:28.966009 kubelet[2760]: I0510 10:45:28.965110 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/58b6bae0-f13b-41ea-adc6-b527c074bb1a-lib-modules\") pod \"cilium-xmhml\" (UID: \"58b6bae0-f13b-41ea-adc6-b527c074bb1a\") " pod="kube-system/cilium-xmhml" May 10 10:45:28.966009 kubelet[2760]: I0510 10:45:28.965175 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f8d2abf6-317c-443a-bdb1-406138d7e855-lib-modules\") pod \"kube-proxy-pdnqb\" (UID: \"f8d2abf6-317c-443a-bdb1-406138d7e855\") " pod="kube-system/kube-proxy-pdnqb" May 10 10:45:28.966009 kubelet[2760]: I0510 10:45:28.965218 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/58b6bae0-f13b-41ea-adc6-b527c074bb1a-host-proc-sys-kernel\") pod \"cilium-xmhml\" (UID: \"58b6bae0-f13b-41ea-adc6-b527c074bb1a\") " pod="kube-system/cilium-xmhml" May 10 10:45:28.966009 kubelet[2760]: I0510 10:45:28.965263 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/58b6bae0-f13b-41ea-adc6-b527c074bb1a-xtables-lock\") pod \"cilium-xmhml\" (UID: \"58b6bae0-f13b-41ea-adc6-b527c074bb1a\") " pod="kube-system/cilium-xmhml" May 10 10:45:28.966485 kubelet[2760]: I0510 10:45:28.965584 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/58b6bae0-f13b-41ea-adc6-b527c074bb1a-cilium-cgroup\") pod \"cilium-xmhml\" (UID: \"58b6bae0-f13b-41ea-adc6-b527c074bb1a\") " pod="kube-system/cilium-xmhml" May 10 10:45:28.969194 kubelet[2760]: I0510 10:45:28.967104 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/58b6bae0-f13b-41ea-adc6-b527c074bb1a-cilium-run\") pod \"cilium-xmhml\" (UID: \"58b6bae0-f13b-41ea-adc6-b527c074bb1a\") " pod="kube-system/cilium-xmhml" May 10 10:45:28.969194 kubelet[2760]: I0510 10:45:28.967286 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f8d2abf6-317c-443a-bdb1-406138d7e855-xtables-lock\") pod \"kube-proxy-pdnqb\" (UID: \"f8d2abf6-317c-443a-bdb1-406138d7e855\") " pod="kube-system/kube-proxy-pdnqb" May 10 10:45:28.969194 kubelet[2760]: I0510 10:45:28.967388 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/58b6bae0-f13b-41ea-adc6-b527c074bb1a-hostproc\") pod \"cilium-xmhml\" (UID: \"58b6bae0-f13b-41ea-adc6-b527c074bb1a\") " pod="kube-system/cilium-xmhml" May 10 10:45:28.969194 kubelet[2760]: I0510 10:45:28.967429 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/58b6bae0-f13b-41ea-adc6-b527c074bb1a-host-proc-sys-net\") pod \"cilium-xmhml\" (UID: \"58b6bae0-f13b-41ea-adc6-b527c074bb1a\") " pod="kube-system/cilium-xmhml" May 10 10:45:28.969194 kubelet[2760]: I0510 10:45:28.967681 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f8d2abf6-317c-443a-bdb1-406138d7e855-kube-proxy\") pod \"kube-proxy-pdnqb\" (UID: \"f8d2abf6-317c-443a-bdb1-406138d7e855\") " pod="kube-system/kube-proxy-pdnqb" May 10 10:45:28.969194 kubelet[2760]: I0510 10:45:28.968061 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/58b6bae0-f13b-41ea-adc6-b527c074bb1a-bpf-maps\") pod \"cilium-xmhml\" (UID: \"58b6bae0-f13b-41ea-adc6-b527c074bb1a\") " pod="kube-system/cilium-xmhml" May 10 10:45:28.969485 kubelet[2760]: I0510 10:45:28.968276 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/58b6bae0-f13b-41ea-adc6-b527c074bb1a-clustermesh-secrets\") pod \"cilium-xmhml\" (UID: \"58b6bae0-f13b-41ea-adc6-b527c074bb1a\") " pod="kube-system/cilium-xmhml" May 10 10:45:28.969485 kubelet[2760]: I0510 10:45:28.968357 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/58b6bae0-f13b-41ea-adc6-b527c074bb1a-cilium-config-path\") pod \"cilium-xmhml\" (UID: \"58b6bae0-f13b-41ea-adc6-b527c074bb1a\") " pod="kube-system/cilium-xmhml" May 10 10:45:28.969485 kubelet[2760]: I0510 10:45:28.968498 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/58b6bae0-f13b-41ea-adc6-b527c074bb1a-hubble-tls\") pod \"cilium-xmhml\" (UID: \"58b6bae0-f13b-41ea-adc6-b527c074bb1a\") " pod="kube-system/cilium-xmhml" May 10 10:45:28.969485 kubelet[2760]: I0510 10:45:28.968554 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vggwj\" (UniqueName: \"kubernetes.io/projected/58b6bae0-f13b-41ea-adc6-b527c074bb1a-kube-api-access-vggwj\") pod \"cilium-xmhml\" (UID: \"58b6bae0-f13b-41ea-adc6-b527c074bb1a\") " pod="kube-system/cilium-xmhml" May 10 10:45:28.969485 kubelet[2760]: I0510 10:45:28.968594 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/58b6bae0-f13b-41ea-adc6-b527c074bb1a-cni-path\") pod \"cilium-xmhml\" (UID: \"58b6bae0-f13b-41ea-adc6-b527c074bb1a\") " pod="kube-system/cilium-xmhml" May 10 10:45:28.969485 kubelet[2760]: I0510 10:45:28.968633 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/58b6bae0-f13b-41ea-adc6-b527c074bb1a-etc-cni-netd\") pod \"cilium-xmhml\" (UID: \"58b6bae0-f13b-41ea-adc6-b527c074bb1a\") " pod="kube-system/cilium-xmhml" May 10 10:45:29.072794 kubelet[2760]: I0510 10:45:29.070862 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5pd7q\" (UniqueName: \"kubernetes.io/projected/fc5f0b2e-fa7c-4dac-afac-3bf87e9a9c0f-kube-api-access-5pd7q\") pod \"cilium-operator-6c4d7847fc-lljcc\" (UID: 
\"fc5f0b2e-fa7c-4dac-afac-3bf87e9a9c0f\") " pod="kube-system/cilium-operator-6c4d7847fc-lljcc" May 10 10:45:29.072794 kubelet[2760]: I0510 10:45:29.071041 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fc5f0b2e-fa7c-4dac-afac-3bf87e9a9c0f-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-lljcc\" (UID: \"fc5f0b2e-fa7c-4dac-afac-3bf87e9a9c0f\") " pod="kube-system/cilium-operator-6c4d7847fc-lljcc" May 10 10:45:29.076776 systemd[1]: Created slice kubepods-besteffort-podfc5f0b2e_fa7c_4dac_afac_3bf87e9a9c0f.slice - libcontainer container kubepods-besteffort-podfc5f0b2e_fa7c_4dac_afac_3bf87e9a9c0f.slice. May 10 10:45:30.073111 kubelet[2760]: E0510 10:45:30.071964 2760 configmap.go:193] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition May 10 10:45:30.073111 kubelet[2760]: E0510 10:45:30.072137 2760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f8d2abf6-317c-443a-bdb1-406138d7e855-kube-proxy podName:f8d2abf6-317c-443a-bdb1-406138d7e855 nodeName:}" failed. No retries permitted until 2025-05-10 10:45:30.572095013 +0000 UTC m=+6.622768318 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/f8d2abf6-317c-443a-bdb1-406138d7e855-kube-proxy") pod "kube-proxy-pdnqb" (UID: "f8d2abf6-317c-443a-bdb1-406138d7e855") : failed to sync configmap cache: timed out waiting for the condition May 10 10:45:30.073111 kubelet[2760]: E0510 10:45:30.072580 2760 secret.go:189] Couldn't get secret kube-system/cilium-clustermesh: failed to sync secret cache: timed out waiting for the condition May 10 10:45:30.073111 kubelet[2760]: E0510 10:45:30.072658 2760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/58b6bae0-f13b-41ea-adc6-b527c074bb1a-clustermesh-secrets podName:58b6bae0-f13b-41ea-adc6-b527c074bb1a nodeName:}" failed. No retries permitted until 2025-05-10 10:45:30.572633589 +0000 UTC m=+6.623306884 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "clustermesh-secrets" (UniqueName: "kubernetes.io/secret/58b6bae0-f13b-41ea-adc6-b527c074bb1a-clustermesh-secrets") pod "cilium-xmhml" (UID: "58b6bae0-f13b-41ea-adc6-b527c074bb1a") : failed to sync secret cache: timed out waiting for the condition May 10 10:45:30.073111 kubelet[2760]: E0510 10:45:30.072939 2760 configmap.go:193] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition May 10 10:45:30.074357 kubelet[2760]: E0510 10:45:30.073049 2760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/58b6bae0-f13b-41ea-adc6-b527c074bb1a-cilium-config-path podName:58b6bae0-f13b-41ea-adc6-b527c074bb1a nodeName:}" failed. No retries permitted until 2025-05-10 10:45:30.573003201 +0000 UTC m=+6.623676496 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/58b6bae0-f13b-41ea-adc6-b527c074bb1a-cilium-config-path") pod "cilium-xmhml" (UID: "58b6bae0-f13b-41ea-adc6-b527c074bb1a") : failed to sync configmap cache: timed out waiting for the condition May 10 10:45:30.173155 kubelet[2760]: E0510 10:45:30.172869 2760 configmap.go:193] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition May 10 10:45:30.173155 kubelet[2760]: E0510 10:45:30.173074 2760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/fc5f0b2e-fa7c-4dac-afac-3bf87e9a9c0f-cilium-config-path podName:fc5f0b2e-fa7c-4dac-afac-3bf87e9a9c0f nodeName:}" failed. No retries permitted until 2025-05-10 10:45:30.67301107 +0000 UTC m=+6.723684365 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/fc5f0b2e-fa7c-4dac-afac-3bf87e9a9c0f-cilium-config-path") pod "cilium-operator-6c4d7847fc-lljcc" (UID: "fc5f0b2e-fa7c-4dac-afac-3bf87e9a9c0f") : failed to sync configmap cache: timed out waiting for the condition May 10 10:45:30.626086 containerd[1527]: time="2025-05-10T10:45:30.625973176Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xmhml,Uid:58b6bae0-f13b-41ea-adc6-b527c074bb1a,Namespace:kube-system,Attempt:0,}" May 10 10:45:30.639525 containerd[1527]: time="2025-05-10T10:45:30.639465339Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pdnqb,Uid:f8d2abf6-317c-443a-bdb1-406138d7e855,Namespace:kube-system,Attempt:0,}" May 10 10:45:30.721821 containerd[1527]: time="2025-05-10T10:45:30.721743303Z" level=info msg="connecting to shim 34578985d17308cbfe47aa663ce7aed86a018217acfecfab7d1fd145a39ae282" address="unix:///run/containerd/s/8035c200bf5ecd2329672e907b0dacf288b89dd935b1158dc4919989ccf77677" namespace=k8s.io protocol=ttrpc version=3 May 10 10:45:30.734604 containerd[1527]: time="2025-05-10T10:45:30.733805857Z" level=info msg="connecting to shim 49a49212e6425194b17dff58cc90d5f50aaef1ad586febaf136bb221f4cc18e2" address="unix:///run/containerd/s/e222d6d65edef9c2763cf5e9af6fcdd74f0f2fe10df84a8b6bff57bcec7f031d" namespace=k8s.io protocol=ttrpc version=3 May 10 10:45:30.762022 systemd[1]: Started cri-containerd-34578985d17308cbfe47aa663ce7aed86a018217acfecfab7d1fd145a39ae282.scope - libcontainer container 34578985d17308cbfe47aa663ce7aed86a018217acfecfab7d1fd145a39ae282. May 10 10:45:30.787880 systemd[1]: Started cri-containerd-49a49212e6425194b17dff58cc90d5f50aaef1ad586febaf136bb221f4cc18e2.scope - libcontainer container 49a49212e6425194b17dff58cc90d5f50aaef1ad586febaf136bb221f4cc18e2. 
May 10 10:45:30.822559 containerd[1527]: time="2025-05-10T10:45:30.822512781Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xmhml,Uid:58b6bae0-f13b-41ea-adc6-b527c074bb1a,Namespace:kube-system,Attempt:0,} returns sandbox id \"34578985d17308cbfe47aa663ce7aed86a018217acfecfab7d1fd145a39ae282\"" May 10 10:45:30.824741 containerd[1527]: time="2025-05-10T10:45:30.824679590Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" May 10 10:45:30.836366 containerd[1527]: time="2025-05-10T10:45:30.836334024Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pdnqb,Uid:f8d2abf6-317c-443a-bdb1-406138d7e855,Namespace:kube-system,Attempt:0,} returns sandbox id \"49a49212e6425194b17dff58cc90d5f50aaef1ad586febaf136bb221f4cc18e2\"" May 10 10:45:30.840742 containerd[1527]: time="2025-05-10T10:45:30.840070867Z" level=info msg="CreateContainer within sandbox \"49a49212e6425194b17dff58cc90d5f50aaef1ad586febaf136bb221f4cc18e2\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 10 10:45:30.861919 containerd[1527]: time="2025-05-10T10:45:30.860094280Z" level=info msg="Container dccf7dfdbd9877fa297b47539852c605a1c51058b15a99010dde3737e1f6fa7d: CDI devices from CRI Config.CDIDevices: []" May 10 10:45:30.878954 containerd[1527]: time="2025-05-10T10:45:30.878872920Z" level=info msg="CreateContainer within sandbox \"49a49212e6425194b17dff58cc90d5f50aaef1ad586febaf136bb221f4cc18e2\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"dccf7dfdbd9877fa297b47539852c605a1c51058b15a99010dde3737e1f6fa7d\"" May 10 10:45:30.879638 containerd[1527]: time="2025-05-10T10:45:30.879618706Z" level=info msg="StartContainer for \"dccf7dfdbd9877fa297b47539852c605a1c51058b15a99010dde3737e1f6fa7d\"" May 10 10:45:30.882112 containerd[1527]: time="2025-05-10T10:45:30.882081277Z" level=info msg="connecting to shim dccf7dfdbd9877fa297b47539852c605a1c51058b15a99010dde3737e1f6fa7d" address="unix:///run/containerd/s/e222d6d65edef9c2763cf5e9af6fcdd74f0f2fe10df84a8b6bff57bcec7f031d" protocol=ttrpc version=3 May 10 10:45:30.884308 containerd[1527]: time="2025-05-10T10:45:30.884283899Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-lljcc,Uid:fc5f0b2e-fa7c-4dac-afac-3bf87e9a9c0f,Namespace:kube-system,Attempt:0,}" May 10 10:45:30.904902 systemd[1]: Started cri-containerd-dccf7dfdbd9877fa297b47539852c605a1c51058b15a99010dde3737e1f6fa7d.scope - libcontainer container dccf7dfdbd9877fa297b47539852c605a1c51058b15a99010dde3737e1f6fa7d. May 10 10:45:30.918670 containerd[1527]: time="2025-05-10T10:45:30.918619953Z" level=info msg="connecting to shim ed7fba119ac00bc0b25c560a2e380c59a4dc2194369c30e9bb0d9719082e8a78" address="unix:///run/containerd/s/39774eeec81686f62754b26596acdd0b0cccab25e71710cdc3e51a2d1889cdd8" namespace=k8s.io protocol=ttrpc version=3 May 10 10:45:31.024896 systemd[1]: Started cri-containerd-ed7fba119ac00bc0b25c560a2e380c59a4dc2194369c30e9bb0d9719082e8a78.scope - libcontainer container ed7fba119ac00bc0b25c560a2e380c59a4dc2194369c30e9bb0d9719082e8a78. 
May 10 10:45:31.041356 containerd[1527]: time="2025-05-10T10:45:31.041270293Z" level=info msg="StartContainer for \"dccf7dfdbd9877fa297b47539852c605a1c51058b15a99010dde3737e1f6fa7d\" returns successfully" May 10 10:45:31.090842 containerd[1527]: time="2025-05-10T10:45:31.090396689Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-lljcc,Uid:fc5f0b2e-fa7c-4dac-afac-3bf87e9a9c0f,Namespace:kube-system,Attempt:0,} returns sandbox id \"ed7fba119ac00bc0b25c560a2e380c59a4dc2194369c30e9bb0d9719082e8a78\"" May 10 10:45:31.165513 kubelet[2760]: I0510 10:45:31.165390 2760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-pdnqb" podStartSLOduration=3.165374309 podStartE2EDuration="3.165374309s" podCreationTimestamp="2025-05-10 10:45:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-10 10:45:31.165082087 +0000 UTC m=+7.215755362" watchObservedRunningTime="2025-05-10 10:45:31.165374309 +0000 UTC m=+7.216047554" May 10 10:45:37.760455 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount660375852.mount: Deactivated successfully. May 10 10:45:41.948521 containerd[1527]: time="2025-05-10T10:45:41.948351192Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 10:45:41.950808 containerd[1527]: time="2025-05-10T10:45:41.950680062Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" May 10 10:45:41.953430 containerd[1527]: time="2025-05-10T10:45:41.953327367Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 10:45:41.958239 containerd[1527]: time="2025-05-10T10:45:41.957261793Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 11.132372168s" May 10 10:45:41.958239 containerd[1527]: time="2025-05-10T10:45:41.957347682Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" May 10 10:45:41.961507 containerd[1527]: time="2025-05-10T10:45:41.961230308Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" May 10 10:45:41.965786 containerd[1527]: time="2025-05-10T10:45:41.965627524Z" level=info msg="CreateContainer within sandbox \"34578985d17308cbfe47aa663ce7aed86a018217acfecfab7d1fd145a39ae282\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 10 10:45:42.008782 containerd[1527]: time="2025-05-10T10:45:42.007057740Z" level=info msg="Container 862ddaea881d2cfae7600c014cc6eeb395f3f87b375aec4e5cf9c3b8685b1111: CDI devices from CRI Config.CDIDevices: []" May 10 10:45:42.021630 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount4166955973.mount: Deactivated successfully. May 10 10:45:42.036710 containerd[1527]: time="2025-05-10T10:45:42.036657241Z" level=info msg="CreateContainer within sandbox \"34578985d17308cbfe47aa663ce7aed86a018217acfecfab7d1fd145a39ae282\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"862ddaea881d2cfae7600c014cc6eeb395f3f87b375aec4e5cf9c3b8685b1111\"" May 10 10:45:42.037998 containerd[1527]: time="2025-05-10T10:45:42.037977933Z" level=info msg="StartContainer for \"862ddaea881d2cfae7600c014cc6eeb395f3f87b375aec4e5cf9c3b8685b1111\"" May 10 10:45:42.039209 containerd[1527]: time="2025-05-10T10:45:42.039185132Z" level=info msg="connecting to shim 862ddaea881d2cfae7600c014cc6eeb395f3f87b375aec4e5cf9c3b8685b1111" address="unix:///run/containerd/s/8035c200bf5ecd2329672e907b0dacf288b89dd935b1158dc4919989ccf77677" protocol=ttrpc version=3 May 10 10:45:42.073839 systemd[1]: Started cri-containerd-862ddaea881d2cfae7600c014cc6eeb395f3f87b375aec4e5cf9c3b8685b1111.scope - libcontainer container 862ddaea881d2cfae7600c014cc6eeb395f3f87b375aec4e5cf9c3b8685b1111. May 10 10:45:42.165771 containerd[1527]: time="2025-05-10T10:45:42.165533649Z" level=info msg="StartContainer for \"862ddaea881d2cfae7600c014cc6eeb395f3f87b375aec4e5cf9c3b8685b1111\" returns successfully" May 10 10:45:42.180884 systemd[1]: cri-containerd-862ddaea881d2cfae7600c014cc6eeb395f3f87b375aec4e5cf9c3b8685b1111.scope: Deactivated successfully. May 10 10:45:42.182861 containerd[1527]: time="2025-05-10T10:45:42.182821663Z" level=info msg="received exit event container_id:\"862ddaea881d2cfae7600c014cc6eeb395f3f87b375aec4e5cf9c3b8685b1111\" id:\"862ddaea881d2cfae7600c014cc6eeb395f3f87b375aec4e5cf9c3b8685b1111\" pid:3180 exited_at:{seconds:1746873942 nanos:181541361}" May 10 10:45:42.184068 containerd[1527]: time="2025-05-10T10:45:42.184031727Z" level=info msg="TaskExit event in podsandbox handler container_id:\"862ddaea881d2cfae7600c014cc6eeb395f3f87b375aec4e5cf9c3b8685b1111\" id:\"862ddaea881d2cfae7600c014cc6eeb395f3f87b375aec4e5cf9c3b8685b1111\" pid:3180 exited_at:{seconds:1746873942 nanos:181541361}" May 10 10:45:42.212657 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-862ddaea881d2cfae7600c014cc6eeb395f3f87b375aec4e5cf9c3b8685b1111-rootfs.mount: Deactivated successfully. May 10 10:45:44.205745 containerd[1527]: time="2025-05-10T10:45:44.204507305Z" level=info msg="CreateContainer within sandbox \"34578985d17308cbfe47aa663ce7aed86a018217acfecfab7d1fd145a39ae282\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 10 10:45:44.241808 containerd[1527]: time="2025-05-10T10:45:44.235986649Z" level=info msg="Container 4662abd002d0cce9b35c088163a204fe5768252faff5abd6d424893b04efc586: CDI devices from CRI Config.CDIDevices: []" May 10 10:45:44.252810 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount736007368.mount: Deactivated successfully. 
May 10 10:45:44.269596 containerd[1527]: time="2025-05-10T10:45:44.269561208Z" level=info msg="CreateContainer within sandbox \"34578985d17308cbfe47aa663ce7aed86a018217acfecfab7d1fd145a39ae282\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"4662abd002d0cce9b35c088163a204fe5768252faff5abd6d424893b04efc586\"" May 10 10:45:44.270808 containerd[1527]: time="2025-05-10T10:45:44.270745132Z" level=info msg="StartContainer for \"4662abd002d0cce9b35c088163a204fe5768252faff5abd6d424893b04efc586\"" May 10 10:45:44.272896 containerd[1527]: time="2025-05-10T10:45:44.272583646Z" level=info msg="connecting to shim 4662abd002d0cce9b35c088163a204fe5768252faff5abd6d424893b04efc586" address="unix:///run/containerd/s/8035c200bf5ecd2329672e907b0dacf288b89dd935b1158dc4919989ccf77677" protocol=ttrpc version=3 May 10 10:45:44.302378 systemd[1]: Started cri-containerd-4662abd002d0cce9b35c088163a204fe5768252faff5abd6d424893b04efc586.scope - libcontainer container 4662abd002d0cce9b35c088163a204fe5768252faff5abd6d424893b04efc586. May 10 10:45:44.352976 containerd[1527]: time="2025-05-10T10:45:44.352846610Z" level=info msg="StartContainer for \"4662abd002d0cce9b35c088163a204fe5768252faff5abd6d424893b04efc586\" returns successfully" May 10 10:45:44.364990 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 10 10:45:44.365259 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 10 10:45:44.365650 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... May 10 10:45:44.368070 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 10 10:45:44.371086 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 10 10:45:44.373178 containerd[1527]: time="2025-05-10T10:45:44.373086755Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4662abd002d0cce9b35c088163a204fe5768252faff5abd6d424893b04efc586\" id:\"4662abd002d0cce9b35c088163a204fe5768252faff5abd6d424893b04efc586\" pid:3222 exited_at:{seconds:1746873944 nanos:372580034}" May 10 10:45:44.373406 containerd[1527]: time="2025-05-10T10:45:44.373276856Z" level=info msg="received exit event container_id:\"4662abd002d0cce9b35c088163a204fe5768252faff5abd6d424893b04efc586\" id:\"4662abd002d0cce9b35c088163a204fe5768252faff5abd6d424893b04efc586\" pid:3222 exited_at:{seconds:1746873944 nanos:372580034}" May 10 10:45:44.373480 systemd[1]: cri-containerd-4662abd002d0cce9b35c088163a204fe5768252faff5abd6d424893b04efc586.scope: Deactivated successfully. May 10 10:45:44.421241 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4662abd002d0cce9b35c088163a204fe5768252faff5abd6d424893b04efc586-rootfs.mount: Deactivated successfully. May 10 10:45:44.431349 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 10 10:45:45.217252 containerd[1527]: time="2025-05-10T10:45:45.217160185Z" level=info msg="CreateContainer within sandbox \"34578985d17308cbfe47aa663ce7aed86a018217acfecfab7d1fd145a39ae282\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 10 10:45:45.279193 containerd[1527]: time="2025-05-10T10:45:45.279158744Z" level=info msg="Container dddd1427925d22f623670bdf22f3fb8ec733e146cc31b926bc5f3978cf6f7aa2: CDI devices from CRI Config.CDIDevices: []" May 10 10:45:45.281193 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3525658432.mount: Deactivated successfully. 
May 10 10:45:45.308083 containerd[1527]: time="2025-05-10T10:45:45.308044280Z" level=info msg="CreateContainer within sandbox \"34578985d17308cbfe47aa663ce7aed86a018217acfecfab7d1fd145a39ae282\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"dddd1427925d22f623670bdf22f3fb8ec733e146cc31b926bc5f3978cf6f7aa2\"" May 10 10:45:45.308920 containerd[1527]: time="2025-05-10T10:45:45.308886874Z" level=info msg="StartContainer for \"dddd1427925d22f623670bdf22f3fb8ec733e146cc31b926bc5f3978cf6f7aa2\"" May 10 10:45:45.311135 containerd[1527]: time="2025-05-10T10:45:45.311103891Z" level=info msg="connecting to shim dddd1427925d22f623670bdf22f3fb8ec733e146cc31b926bc5f3978cf6f7aa2" address="unix:///run/containerd/s/8035c200bf5ecd2329672e907b0dacf288b89dd935b1158dc4919989ccf77677" protocol=ttrpc version=3 May 10 10:45:45.352854 systemd[1]: Started cri-containerd-dddd1427925d22f623670bdf22f3fb8ec733e146cc31b926bc5f3978cf6f7aa2.scope - libcontainer container dddd1427925d22f623670bdf22f3fb8ec733e146cc31b926bc5f3978cf6f7aa2. May 10 10:45:45.405642 systemd[1]: cri-containerd-dddd1427925d22f623670bdf22f3fb8ec733e146cc31b926bc5f3978cf6f7aa2.scope: Deactivated successfully. May 10 10:45:45.409518 containerd[1527]: time="2025-05-10T10:45:45.409356044Z" level=info msg="received exit event container_id:\"dddd1427925d22f623670bdf22f3fb8ec733e146cc31b926bc5f3978cf6f7aa2\" id:\"dddd1427925d22f623670bdf22f3fb8ec733e146cc31b926bc5f3978cf6f7aa2\" pid:3279 exited_at:{seconds:1746873945 nanos:408387053}" May 10 10:45:45.413104 containerd[1527]: time="2025-05-10T10:45:45.413085110Z" level=info msg="StartContainer for \"dddd1427925d22f623670bdf22f3fb8ec733e146cc31b926bc5f3978cf6f7aa2\" returns successfully" May 10 10:45:45.413234 containerd[1527]: time="2025-05-10T10:45:45.413199354Z" level=info msg="TaskExit event in podsandbox handler container_id:\"dddd1427925d22f623670bdf22f3fb8ec733e146cc31b926bc5f3978cf6f7aa2\" id:\"dddd1427925d22f623670bdf22f3fb8ec733e146cc31b926bc5f3978cf6f7aa2\" pid:3279 exited_at:{seconds:1746873945 nanos:408387053}" May 10 10:45:46.176782 containerd[1527]: time="2025-05-10T10:45:46.176742072Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 10:45:46.177915 containerd[1527]: time="2025-05-10T10:45:46.177849550Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" May 10 10:45:46.179039 containerd[1527]: time="2025-05-10T10:45:46.178954843Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 10:45:46.180714 containerd[1527]: time="2025-05-10T10:45:46.180446178Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 4.219159269s" May 10 10:45:46.180714 containerd[1527]: time="2025-05-10T10:45:46.180484393Z" level=info msg="PullImage 
\"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" May 10 10:45:46.184722 containerd[1527]: time="2025-05-10T10:45:46.183003931Z" level=info msg="CreateContainer within sandbox \"ed7fba119ac00bc0b25c560a2e380c59a4dc2194369c30e9bb0d9719082e8a78\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" May 10 10:45:46.191802 containerd[1527]: time="2025-05-10T10:45:46.191774778Z" level=info msg="Container 4d75b14cef3c14a0daab1e90047ff7553ddabd5d279a28be8f9d3ad9b4c3b982: CDI devices from CRI Config.CDIDevices: []" May 10 10:45:46.202145 containerd[1527]: time="2025-05-10T10:45:46.202096617Z" level=info msg="CreateContainer within sandbox \"ed7fba119ac00bc0b25c560a2e380c59a4dc2194369c30e9bb0d9719082e8a78\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"4d75b14cef3c14a0daab1e90047ff7553ddabd5d279a28be8f9d3ad9b4c3b982\"" May 10 10:45:46.202918 containerd[1527]: time="2025-05-10T10:45:46.202841498Z" level=info msg="StartContainer for \"4d75b14cef3c14a0daab1e90047ff7553ddabd5d279a28be8f9d3ad9b4c3b982\"" May 10 10:45:46.204128 containerd[1527]: time="2025-05-10T10:45:46.203989414Z" level=info msg="connecting to shim 4d75b14cef3c14a0daab1e90047ff7553ddabd5d279a28be8f9d3ad9b4c3b982" address="unix:///run/containerd/s/39774eeec81686f62754b26596acdd0b0cccab25e71710cdc3e51a2d1889cdd8" protocol=ttrpc version=3 May 10 10:45:46.221485 containerd[1527]: time="2025-05-10T10:45:46.221441392Z" level=info msg="CreateContainer within sandbox \"34578985d17308cbfe47aa663ce7aed86a018217acfecfab7d1fd145a39ae282\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 10 10:45:46.236967 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dddd1427925d22f623670bdf22f3fb8ec733e146cc31b926bc5f3978cf6f7aa2-rootfs.mount: Deactivated successfully. May 10 10:45:46.243874 systemd[1]: Started cri-containerd-4d75b14cef3c14a0daab1e90047ff7553ddabd5d279a28be8f9d3ad9b4c3b982.scope - libcontainer container 4d75b14cef3c14a0daab1e90047ff7553ddabd5d279a28be8f9d3ad9b4c3b982. May 10 10:45:46.256818 containerd[1527]: time="2025-05-10T10:45:46.251067499Z" level=info msg="Container 7c9ad28eb11e80e3f7833ea33759895ef2a3a5bae0cb985c7129d898ee204d7f: CDI devices from CRI Config.CDIDevices: []" May 10 10:45:46.260813 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1090037636.mount: Deactivated successfully. 
May 10 10:45:46.270148 containerd[1527]: time="2025-05-10T10:45:46.270116960Z" level=info msg="CreateContainer within sandbox \"34578985d17308cbfe47aa663ce7aed86a018217acfecfab7d1fd145a39ae282\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"7c9ad28eb11e80e3f7833ea33759895ef2a3a5bae0cb985c7129d898ee204d7f\"" May 10 10:45:46.271588 containerd[1527]: time="2025-05-10T10:45:46.271554661Z" level=info msg="StartContainer for \"7c9ad28eb11e80e3f7833ea33759895ef2a3a5bae0cb985c7129d898ee204d7f\"" May 10 10:45:46.274072 containerd[1527]: time="2025-05-10T10:45:46.274043199Z" level=info msg="connecting to shim 7c9ad28eb11e80e3f7833ea33759895ef2a3a5bae0cb985c7129d898ee204d7f" address="unix:///run/containerd/s/8035c200bf5ecd2329672e907b0dacf288b89dd935b1158dc4919989ccf77677" protocol=ttrpc version=3 May 10 10:45:46.315082 systemd[1]: Started cri-containerd-7c9ad28eb11e80e3f7833ea33759895ef2a3a5bae0cb985c7129d898ee204d7f.scope - libcontainer container 7c9ad28eb11e80e3f7833ea33759895ef2a3a5bae0cb985c7129d898ee204d7f. May 10 10:45:46.323976 containerd[1527]: time="2025-05-10T10:45:46.323900899Z" level=info msg="StartContainer for \"4d75b14cef3c14a0daab1e90047ff7553ddabd5d279a28be8f9d3ad9b4c3b982\" returns successfully" May 10 10:45:46.349515 systemd[1]: cri-containerd-7c9ad28eb11e80e3f7833ea33759895ef2a3a5bae0cb985c7129d898ee204d7f.scope: Deactivated successfully. May 10 10:45:46.351359 containerd[1527]: time="2025-05-10T10:45:46.351292172Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7c9ad28eb11e80e3f7833ea33759895ef2a3a5bae0cb985c7129d898ee204d7f\" id:\"7c9ad28eb11e80e3f7833ea33759895ef2a3a5bae0cb985c7129d898ee204d7f\" pid:3349 exited_at:{seconds:1746873946 nanos:350647265}" May 10 10:45:46.353583 containerd[1527]: time="2025-05-10T10:45:46.353446879Z" level=info msg="received exit event container_id:\"7c9ad28eb11e80e3f7833ea33759895ef2a3a5bae0cb985c7129d898ee204d7f\" id:\"7c9ad28eb11e80e3f7833ea33759895ef2a3a5bae0cb985c7129d898ee204d7f\" pid:3349 exited_at:{seconds:1746873946 nanos:350647265}" May 10 10:45:46.365472 containerd[1527]: time="2025-05-10T10:45:46.365388442Z" level=info msg="StartContainer for \"7c9ad28eb11e80e3f7833ea33759895ef2a3a5bae0cb985c7129d898ee204d7f\" returns successfully" May 10 10:45:46.398445 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7c9ad28eb11e80e3f7833ea33759895ef2a3a5bae0cb985c7129d898ee204d7f-rootfs.mount: Deactivated successfully. 
May 10 10:45:47.312736 containerd[1527]: time="2025-05-10T10:45:47.312099597Z" level=info msg="CreateContainer within sandbox \"34578985d17308cbfe47aa663ce7aed86a018217acfecfab7d1fd145a39ae282\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 10 10:45:47.336321 containerd[1527]: time="2025-05-10T10:45:47.336274823Z" level=info msg="Container 98ed2eb55a347dac1733044a93724a92c25a236224883ed4cc847f7e65e45253: CDI devices from CRI Config.CDIDevices: []" May 10 10:45:47.351550 containerd[1527]: time="2025-05-10T10:45:47.351507223Z" level=info msg="CreateContainer within sandbox \"34578985d17308cbfe47aa663ce7aed86a018217acfecfab7d1fd145a39ae282\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"98ed2eb55a347dac1733044a93724a92c25a236224883ed4cc847f7e65e45253\"" May 10 10:45:47.352295 containerd[1527]: time="2025-05-10T10:45:47.352271761Z" level=info msg="StartContainer for \"98ed2eb55a347dac1733044a93724a92c25a236224883ed4cc847f7e65e45253\"" May 10 10:45:47.353990 containerd[1527]: time="2025-05-10T10:45:47.353960115Z" level=info msg="connecting to shim 98ed2eb55a347dac1733044a93724a92c25a236224883ed4cc847f7e65e45253" address="unix:///run/containerd/s/8035c200bf5ecd2329672e907b0dacf288b89dd935b1158dc4919989ccf77677" protocol=ttrpc version=3 May 10 10:45:47.444853 systemd[1]: Started cri-containerd-98ed2eb55a347dac1733044a93724a92c25a236224883ed4cc847f7e65e45253.scope - libcontainer container 98ed2eb55a347dac1733044a93724a92c25a236224883ed4cc847f7e65e45253. May 10 10:45:47.512398 kubelet[2760]: I0510 10:45:47.511864 2760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-lljcc" podStartSLOduration=4.422383373 podStartE2EDuration="19.511843425s" podCreationTimestamp="2025-05-10 10:45:28 +0000 UTC" firstStartedPulling="2025-05-10 10:45:31.092033336 +0000 UTC m=+7.142706591" lastFinishedPulling="2025-05-10 10:45:46.181493398 +0000 UTC m=+22.232166643" observedRunningTime="2025-05-10 10:45:47.403168757 +0000 UTC m=+23.453842022" watchObservedRunningTime="2025-05-10 10:45:47.511843425 +0000 UTC m=+23.562516680" May 10 10:45:47.579083 containerd[1527]: time="2025-05-10T10:45:47.578985337Z" level=info msg="StartContainer for \"98ed2eb55a347dac1733044a93724a92c25a236224883ed4cc847f7e65e45253\" returns successfully" May 10 10:45:48.010413 containerd[1527]: time="2025-05-10T10:45:48.010324950Z" level=info msg="TaskExit event in podsandbox handler container_id:\"98ed2eb55a347dac1733044a93724a92c25a236224883ed4cc847f7e65e45253\" id:\"38dad624aaef14c8a45e8a0cb7888140cccabfcf942426d8212111e5b8c50b3c\" pid:3417 exited_at:{seconds:1746873948 nanos:10001521}" May 10 10:45:48.065555 kubelet[2760]: I0510 10:45:48.065521 2760 kubelet_node_status.go:502] "Fast updating node status as it just became ready" May 10 10:45:48.111794 systemd[1]: Created slice kubepods-burstable-pod9e7c4047_e125_4202_8632_e130f272ed2b.slice - libcontainer container kubepods-burstable-pod9e7c4047_e125_4202_8632_e130f272ed2b.slice. May 10 10:45:48.119770 systemd[1]: Created slice kubepods-burstable-pod13d4c605_6eb0_4856_b9f0_e613d8bf57ba.slice - libcontainer container kubepods-burstable-pod13d4c605_6eb0_4856_b9f0_e613d8bf57ba.slice. 
May 10 10:45:48.135247 kubelet[2760]: I0510 10:45:48.135019 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9e7c4047-e125-4202-8632-e130f272ed2b-config-volume\") pod \"coredns-668d6bf9bc-qqpsh\" (UID: \"9e7c4047-e125-4202-8632-e130f272ed2b\") " pod="kube-system/coredns-668d6bf9bc-qqpsh" May 10 10:45:48.135247 kubelet[2760]: I0510 10:45:48.135079 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/13d4c605-6eb0-4856-b9f0-e613d8bf57ba-config-volume\") pod \"coredns-668d6bf9bc-spcb5\" (UID: \"13d4c605-6eb0-4856-b9f0-e613d8bf57ba\") " pod="kube-system/coredns-668d6bf9bc-spcb5" May 10 10:45:48.135247 kubelet[2760]: I0510 10:45:48.135120 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tgk9f\" (UniqueName: \"kubernetes.io/projected/9e7c4047-e125-4202-8632-e130f272ed2b-kube-api-access-tgk9f\") pod \"coredns-668d6bf9bc-qqpsh\" (UID: \"9e7c4047-e125-4202-8632-e130f272ed2b\") " pod="kube-system/coredns-668d6bf9bc-qqpsh" May 10 10:45:48.135247 kubelet[2760]: I0510 10:45:48.135145 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwrsp\" (UniqueName: \"kubernetes.io/projected/13d4c605-6eb0-4856-b9f0-e613d8bf57ba-kube-api-access-fwrsp\") pod \"coredns-668d6bf9bc-spcb5\" (UID: \"13d4c605-6eb0-4856-b9f0-e613d8bf57ba\") " pod="kube-system/coredns-668d6bf9bc-spcb5" May 10 10:45:48.344542 kubelet[2760]: I0510 10:45:48.344406 2760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-xmhml" podStartSLOduration=9.209227753 podStartE2EDuration="20.344387386s" podCreationTimestamp="2025-05-10 10:45:28 +0000 UTC" firstStartedPulling="2025-05-10 10:45:30.824351743 +0000 UTC m=+6.875024999" lastFinishedPulling="2025-05-10 10:45:41.959511337 +0000 UTC m=+18.010184632" observedRunningTime="2025-05-10 10:45:48.344107051 +0000 UTC m=+24.394780316" watchObservedRunningTime="2025-05-10 10:45:48.344387386 +0000 UTC m=+24.395060631" May 10 10:45:48.417100 containerd[1527]: time="2025-05-10T10:45:48.417063357Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qqpsh,Uid:9e7c4047-e125-4202-8632-e130f272ed2b,Namespace:kube-system,Attempt:0,}" May 10 10:45:48.425237 containerd[1527]: time="2025-05-10T10:45:48.425200502Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-spcb5,Uid:13d4c605-6eb0-4856-b9f0-e613d8bf57ba,Namespace:kube-system,Attempt:0,}" May 10 10:45:50.408899 systemd-networkd[1420]: cilium_host: Link UP May 10 10:45:50.409687 systemd-networkd[1420]: cilium_net: Link UP May 10 10:45:50.410597 systemd-networkd[1420]: cilium_net: Gained carrier May 10 10:45:50.412954 systemd-networkd[1420]: cilium_host: Gained carrier May 10 10:45:50.511793 systemd-networkd[1420]: cilium_vxlan: Link UP May 10 10:45:50.511822 systemd-networkd[1420]: cilium_vxlan: Gained carrier May 10 10:45:50.854827 systemd-networkd[1420]: cilium_net: Gained IPv6LL May 10 10:45:50.922005 kernel: NET: Registered PF_ALG protocol family May 10 10:45:51.167702 systemd-networkd[1420]: cilium_host: Gained IPv6LL May 10 10:45:51.617153 systemd-networkd[1420]: cilium_vxlan: Gained IPv6LL May 10 10:45:51.985236 systemd-networkd[1420]: lxc_health: Link UP May 10 10:45:51.997076 systemd-networkd[1420]: lxc_health: Gained 
carrier May 10 10:45:52.508035 kernel: eth0: renamed from tmpfdb99 May 10 10:45:52.516321 systemd-networkd[1420]: lxcb55148147b0b: Link UP May 10 10:45:52.525246 systemd-networkd[1420]: lxcb55148147b0b: Gained carrier May 10 10:45:52.571043 systemd-networkd[1420]: lxcf56747de5815: Link UP May 10 10:45:52.576867 kernel: eth0: renamed from tmpc8454 May 10 10:45:52.580213 systemd-networkd[1420]: lxcf56747de5815: Gained carrier May 10 10:45:53.470937 systemd-networkd[1420]: lxc_health: Gained IPv6LL May 10 10:45:53.570792 kubelet[2760]: I0510 10:45:53.569887 2760 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 10 10:45:53.599920 systemd-networkd[1420]: lxcb55148147b0b: Gained IPv6LL May 10 10:45:53.791001 systemd-networkd[1420]: lxcf56747de5815: Gained IPv6LL May 10 10:45:57.250954 containerd[1527]: time="2025-05-10T10:45:57.249564537Z" level=info msg="connecting to shim fdb999b825140d67347acfb31117c00dff162148de9ad6901884644270f2c7d5" address="unix:///run/containerd/s/eb94edccd68704d33577b81d5adff898189454e0fa5b92eefa085b2f85e52ccc" namespace=k8s.io protocol=ttrpc version=3 May 10 10:45:57.322576 containerd[1527]: time="2025-05-10T10:45:57.322515721Z" level=info msg="connecting to shim c84540bf1ba66478ea9f01e7358d64cae423d5c4ee12c0f21f39052f7cd93cc2" address="unix:///run/containerd/s/919a55c68269dc664bd64d36bf873927c5a657a6afbb888067e66d8dd1bbfe29" namespace=k8s.io protocol=ttrpc version=3 May 10 10:45:57.332231 systemd[1]: Started cri-containerd-fdb999b825140d67347acfb31117c00dff162148de9ad6901884644270f2c7d5.scope - libcontainer container fdb999b825140d67347acfb31117c00dff162148de9ad6901884644270f2c7d5. May 10 10:45:57.481326 systemd[1]: Started cri-containerd-c84540bf1ba66478ea9f01e7358d64cae423d5c4ee12c0f21f39052f7cd93cc2.scope - libcontainer container c84540bf1ba66478ea9f01e7358d64cae423d5c4ee12c0f21f39052f7cd93cc2. 
May 10 10:45:57.523657 containerd[1527]: time="2025-05-10T10:45:57.523588667Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qqpsh,Uid:9e7c4047-e125-4202-8632-e130f272ed2b,Namespace:kube-system,Attempt:0,} returns sandbox id \"fdb999b825140d67347acfb31117c00dff162148de9ad6901884644270f2c7d5\"" May 10 10:45:57.528670 containerd[1527]: time="2025-05-10T10:45:57.527052620Z" level=info msg="CreateContainer within sandbox \"fdb999b825140d67347acfb31117c00dff162148de9ad6901884644270f2c7d5\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 10 10:45:57.543282 containerd[1527]: time="2025-05-10T10:45:57.543232888Z" level=info msg="Container 5f636453b3e85adf7cada1217b9d76dfb8ed7fa5b74cfdb607b74a6bb4999fdb: CDI devices from CRI Config.CDIDevices: []" May 10 10:45:57.554666 containerd[1527]: time="2025-05-10T10:45:57.554534129Z" level=info msg="CreateContainer within sandbox \"fdb999b825140d67347acfb31117c00dff162148de9ad6901884644270f2c7d5\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5f636453b3e85adf7cada1217b9d76dfb8ed7fa5b74cfdb607b74a6bb4999fdb\"" May 10 10:45:57.557573 containerd[1527]: time="2025-05-10T10:45:57.556539828Z" level=info msg="StartContainer for \"5f636453b3e85adf7cada1217b9d76dfb8ed7fa5b74cfdb607b74a6bb4999fdb\"" May 10 10:45:57.557573 containerd[1527]: time="2025-05-10T10:45:57.557405293Z" level=info msg="connecting to shim 5f636453b3e85adf7cada1217b9d76dfb8ed7fa5b74cfdb607b74a6bb4999fdb" address="unix:///run/containerd/s/eb94edccd68704d33577b81d5adff898189454e0fa5b92eefa085b2f85e52ccc" protocol=ttrpc version=3 May 10 10:45:57.581254 containerd[1527]: time="2025-05-10T10:45:57.581222032Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-spcb5,Uid:13d4c605-6eb0-4856-b9f0-e613d8bf57ba,Namespace:kube-system,Attempt:0,} returns sandbox id \"c84540bf1ba66478ea9f01e7358d64cae423d5c4ee12c0f21f39052f7cd93cc2\"" May 10 10:45:57.587993 containerd[1527]: time="2025-05-10T10:45:57.587960205Z" level=info msg="CreateContainer within sandbox \"c84540bf1ba66478ea9f01e7358d64cae423d5c4ee12c0f21f39052f7cd93cc2\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 10 10:45:57.589871 systemd[1]: Started cri-containerd-5f636453b3e85adf7cada1217b9d76dfb8ed7fa5b74cfdb607b74a6bb4999fdb.scope - libcontainer container 5f636453b3e85adf7cada1217b9d76dfb8ed7fa5b74cfdb607b74a6bb4999fdb. 
May 10 10:45:57.603253 containerd[1527]: time="2025-05-10T10:45:57.602488584Z" level=info msg="Container 6d1cc6307a7ea8378e64130a41be560f1ef41121806dea5640c6cdfdee243679: CDI devices from CRI Config.CDIDevices: []" May 10 10:45:57.618015 containerd[1527]: time="2025-05-10T10:45:57.617961261Z" level=info msg="CreateContainer within sandbox \"c84540bf1ba66478ea9f01e7358d64cae423d5c4ee12c0f21f39052f7cd93cc2\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6d1cc6307a7ea8378e64130a41be560f1ef41121806dea5640c6cdfdee243679\"" May 10 10:45:57.619348 containerd[1527]: time="2025-05-10T10:45:57.618568859Z" level=info msg="StartContainer for \"6d1cc6307a7ea8378e64130a41be560f1ef41121806dea5640c6cdfdee243679\"" May 10 10:45:57.619811 containerd[1527]: time="2025-05-10T10:45:57.619776914Z" level=info msg="connecting to shim 6d1cc6307a7ea8378e64130a41be560f1ef41121806dea5640c6cdfdee243679" address="unix:///run/containerd/s/919a55c68269dc664bd64d36bf873927c5a657a6afbb888067e66d8dd1bbfe29" protocol=ttrpc version=3 May 10 10:45:57.646125 containerd[1527]: time="2025-05-10T10:45:57.646073183Z" level=info msg="StartContainer for \"5f636453b3e85adf7cada1217b9d76dfb8ed7fa5b74cfdb607b74a6bb4999fdb\" returns successfully" May 10 10:45:57.647898 systemd[1]: Started cri-containerd-6d1cc6307a7ea8378e64130a41be560f1ef41121806dea5640c6cdfdee243679.scope - libcontainer container 6d1cc6307a7ea8378e64130a41be560f1ef41121806dea5640c6cdfdee243679. May 10 10:45:57.707469 containerd[1527]: time="2025-05-10T10:45:57.707433498Z" level=info msg="StartContainer for \"6d1cc6307a7ea8378e64130a41be560f1ef41121806dea5640c6cdfdee243679\" returns successfully" May 10 10:45:58.526946 kubelet[2760]: I0510 10:45:58.526223 2760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-spcb5" podStartSLOduration=30.526192879 podStartE2EDuration="30.526192879s" podCreationTimestamp="2025-05-10 10:45:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-10 10:45:58.469570051 +0000 UTC m=+34.520243396" watchObservedRunningTime="2025-05-10 10:45:58.526192879 +0000 UTC m=+34.576866154" May 10 10:45:58.560240 kubelet[2760]: I0510 10:45:58.560184 2760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-qqpsh" podStartSLOduration=29.560165556 podStartE2EDuration="29.560165556s" podCreationTimestamp="2025-05-10 10:45:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-10 10:45:58.527959195 +0000 UTC m=+34.578632470" watchObservedRunningTime="2025-05-10 10:45:58.560165556 +0000 UTC m=+34.610838801" May 10 10:49:46.621097 systemd[1]: Started sshd@9-172.24.4.188:22-172.24.4.1:55254.service - OpenSSH per-connection server daemon (172.24.4.1:55254). May 10 10:49:47.904497 sshd[4085]: Accepted publickey for core from 172.24.4.1 port 55254 ssh2: RSA SHA256:KJPMTbzVpA/z4Q0YiJhVuiYABmBXzBBZVku5cZVzxpg May 10 10:49:47.910129 sshd-session[4085]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 10:49:47.939878 systemd-logind[1498]: New session 12 of user core. May 10 10:49:47.947108 systemd[1]: Started session-12.scope - Session 12 of User core. 
May 10 10:49:48.776779 sshd[4087]: Connection closed by 172.24.4.1 port 55254 May 10 10:49:48.777369 sshd-session[4085]: pam_unix(sshd:session): session closed for user core May 10 10:49:48.784778 systemd-logind[1498]: Session 12 logged out. Waiting for processes to exit. May 10 10:49:48.785326 systemd[1]: sshd@9-172.24.4.188:22-172.24.4.1:55254.service: Deactivated successfully. May 10 10:49:48.787603 systemd[1]: session-12.scope: Deactivated successfully. May 10 10:49:48.791020 systemd-logind[1498]: Removed session 12. May 10 10:49:53.808585 systemd[1]: Started sshd@10-172.24.4.188:22-172.24.4.1:54362.service - OpenSSH per-connection server daemon (172.24.4.1:54362). May 10 10:49:55.188114 sshd[4101]: Accepted publickey for core from 172.24.4.1 port 54362 ssh2: RSA SHA256:KJPMTbzVpA/z4Q0YiJhVuiYABmBXzBBZVku5cZVzxpg May 10 10:49:55.193835 sshd-session[4101]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 10:49:55.211624 systemd-logind[1498]: New session 13 of user core. May 10 10:49:55.219145 systemd[1]: Started session-13.scope - Session 13 of User core. May 10 10:49:56.027872 sshd[4103]: Connection closed by 172.24.4.1 port 54362 May 10 10:49:56.029306 sshd-session[4101]: pam_unix(sshd:session): session closed for user core May 10 10:49:56.037593 systemd-logind[1498]: Session 13 logged out. Waiting for processes to exit. May 10 10:49:56.038676 systemd[1]: sshd@10-172.24.4.188:22-172.24.4.1:54362.service: Deactivated successfully. May 10 10:49:56.047271 systemd[1]: session-13.scope: Deactivated successfully. May 10 10:49:56.052332 systemd-logind[1498]: Removed session 13. May 10 10:50:01.058515 systemd[1]: Started sshd@11-172.24.4.188:22-172.24.4.1:54370.service - OpenSSH per-connection server daemon (172.24.4.1:54370). May 10 10:50:02.208216 sshd[4115]: Accepted publickey for core from 172.24.4.1 port 54370 ssh2: RSA SHA256:KJPMTbzVpA/z4Q0YiJhVuiYABmBXzBBZVku5cZVzxpg May 10 10:50:02.213006 sshd-session[4115]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 10:50:02.228185 systemd-logind[1498]: New session 14 of user core. May 10 10:50:02.235153 systemd[1]: Started session-14.scope - Session 14 of User core. May 10 10:50:03.010571 sshd[4119]: Connection closed by 172.24.4.1 port 54370 May 10 10:50:03.011860 sshd-session[4115]: pam_unix(sshd:session): session closed for user core May 10 10:50:03.020788 systemd[1]: sshd@11-172.24.4.188:22-172.24.4.1:54370.service: Deactivated successfully. May 10 10:50:03.027309 systemd[1]: session-14.scope: Deactivated successfully. May 10 10:50:03.030049 systemd-logind[1498]: Session 14 logged out. Waiting for processes to exit. May 10 10:50:03.033850 systemd-logind[1498]: Removed session 14. May 10 10:50:08.026962 systemd[1]: Started sshd@12-172.24.4.188:22-172.24.4.1:56664.service - OpenSSH per-connection server daemon (172.24.4.1:56664). May 10 10:50:09.382136 sshd[4132]: Accepted publickey for core from 172.24.4.1 port 56664 ssh2: RSA SHA256:KJPMTbzVpA/z4Q0YiJhVuiYABmBXzBBZVku5cZVzxpg May 10 10:50:09.385428 sshd-session[4132]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 10:50:09.399758 systemd-logind[1498]: New session 15 of user core. May 10 10:50:09.408272 systemd[1]: Started session-15.scope - Session 15 of User core. 
May 10 10:50:10.261760 sshd[4134]: Connection closed by 172.24.4.1 port 56664 May 10 10:50:10.265286 sshd-session[4132]: pam_unix(sshd:session): session closed for user core May 10 10:50:10.291072 systemd[1]: sshd@12-172.24.4.188:22-172.24.4.1:56664.service: Deactivated successfully. May 10 10:50:10.297112 systemd[1]: session-15.scope: Deactivated successfully. May 10 10:50:10.298413 systemd-logind[1498]: Session 15 logged out. Waiting for processes to exit. May 10 10:50:10.301430 systemd[1]: Started sshd@13-172.24.4.188:22-172.24.4.1:56676.service - OpenSSH per-connection server daemon (172.24.4.1:56676). May 10 10:50:10.304283 systemd-logind[1498]: Removed session 15. May 10 10:50:11.378431 sshd[4146]: Accepted publickey for core from 172.24.4.1 port 56676 ssh2: RSA SHA256:KJPMTbzVpA/z4Q0YiJhVuiYABmBXzBBZVku5cZVzxpg May 10 10:50:11.382056 sshd-session[4146]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 10:50:11.391886 systemd-logind[1498]: New session 16 of user core. May 10 10:50:11.403028 systemd[1]: Started session-16.scope - Session 16 of User core. May 10 10:50:12.120775 sshd[4149]: Connection closed by 172.24.4.1 port 56676 May 10 10:50:12.123256 sshd-session[4146]: pam_unix(sshd:session): session closed for user core May 10 10:50:12.133963 systemd[1]: sshd@13-172.24.4.188:22-172.24.4.1:56676.service: Deactivated successfully. May 10 10:50:12.137781 systemd[1]: session-16.scope: Deactivated successfully. May 10 10:50:12.141051 systemd-logind[1498]: Session 16 logged out. Waiting for processes to exit. May 10 10:50:12.143244 systemd[1]: Started sshd@14-172.24.4.188:22-172.24.4.1:56682.service - OpenSSH per-connection server daemon (172.24.4.1:56682). May 10 10:50:12.145281 systemd-logind[1498]: Removed session 16. May 10 10:50:13.693126 sshd[4158]: Accepted publickey for core from 172.24.4.1 port 56682 ssh2: RSA SHA256:KJPMTbzVpA/z4Q0YiJhVuiYABmBXzBBZVku5cZVzxpg May 10 10:50:13.695990 sshd-session[4158]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 10:50:13.702411 systemd-logind[1498]: New session 17 of user core. May 10 10:50:13.710957 systemd[1]: Started session-17.scope - Session 17 of User core. May 10 10:50:14.567520 sshd[4161]: Connection closed by 172.24.4.1 port 56682 May 10 10:50:14.569936 sshd-session[4158]: pam_unix(sshd:session): session closed for user core May 10 10:50:14.578904 systemd-logind[1498]: Session 17 logged out. Waiting for processes to exit. May 10 10:50:14.580648 systemd[1]: sshd@14-172.24.4.188:22-172.24.4.1:56682.service: Deactivated successfully. May 10 10:50:14.586878 systemd[1]: session-17.scope: Deactivated successfully. May 10 10:50:14.592390 systemd-logind[1498]: Removed session 17. 
May 10 10:50:17.892181 containerd[1527]: time="2025-05-10T10:50:17.891489740Z" level=warning msg="container event discarded" container=ccad955ec49a4a7b38fad6edba168a37046277b6e139a2c0366e3a66bd16f9d7 type=CONTAINER_CREATED_EVENT May 10 10:50:17.892181 containerd[1527]: time="2025-05-10T10:50:17.892035092Z" level=warning msg="container event discarded" container=ccad955ec49a4a7b38fad6edba168a37046277b6e139a2c0366e3a66bd16f9d7 type=CONTAINER_STARTED_EVENT May 10 10:50:17.910563 containerd[1527]: time="2025-05-10T10:50:17.910415211Z" level=warning msg="container event discarded" container=dfb747c1887be96a7e97e2792fbc4b62c001618e480a83b99ab7e95bbae214d5 type=CONTAINER_CREATED_EVENT May 10 10:50:17.910563 containerd[1527]: time="2025-05-10T10:50:17.910516391Z" level=warning msg="container event discarded" container=dfb747c1887be96a7e97e2792fbc4b62c001618e480a83b99ab7e95bbae214d5 type=CONTAINER_STARTED_EVENT May 10 10:50:17.940129 containerd[1527]: time="2025-05-10T10:50:17.939868512Z" level=warning msg="container event discarded" container=394e55c993cafa7829647ebe63ff019946bc9d99b7974b2a488a1202d88854e4 type=CONTAINER_CREATED_EVENT May 10 10:50:17.940129 containerd[1527]: time="2025-05-10T10:50:17.940028242Z" level=warning msg="container event discarded" container=39532e1131cda585cfc5625570a81a5ebd0c88ab1de0bead866e8002d79aeddd type=CONTAINER_CREATED_EVENT May 10 10:50:17.940129 containerd[1527]: time="2025-05-10T10:50:17.940054110Z" level=warning msg="container event discarded" container=39532e1131cda585cfc5625570a81a5ebd0c88ab1de0bead866e8002d79aeddd type=CONTAINER_STARTED_EVENT May 10 10:50:17.971604 containerd[1527]: time="2025-05-10T10:50:17.971411132Z" level=warning msg="container event discarded" container=d2307fbcf3de9bcfaab6ad32f98221c097d38cfff457fd5436484604e8e76490 type=CONTAINER_CREATED_EVENT May 10 10:50:17.988890 containerd[1527]: time="2025-05-10T10:50:17.988768092Z" level=warning msg="container event discarded" container=bc937f218614adf9f25523a51318b76c2ce955ab070ac25e5d4b54f4adcf6bd4 type=CONTAINER_CREATED_EVENT May 10 10:50:18.065433 containerd[1527]: time="2025-05-10T10:50:18.065311540Z" level=warning msg="container event discarded" container=394e55c993cafa7829647ebe63ff019946bc9d99b7974b2a488a1202d88854e4 type=CONTAINER_STARTED_EVENT May 10 10:50:18.088827 containerd[1527]: time="2025-05-10T10:50:18.088736202Z" level=warning msg="container event discarded" container=d2307fbcf3de9bcfaab6ad32f98221c097d38cfff457fd5436484604e8e76490 type=CONTAINER_STARTED_EVENT May 10 10:50:18.148176 containerd[1527]: time="2025-05-10T10:50:18.147819126Z" level=warning msg="container event discarded" container=bc937f218614adf9f25523a51318b76c2ce955ab070ac25e5d4b54f4adcf6bd4 type=CONTAINER_STARTED_EVENT May 10 10:50:19.592315 systemd[1]: Started sshd@15-172.24.4.188:22-172.24.4.1:37436.service - OpenSSH per-connection server daemon (172.24.4.1:37436). May 10 10:50:20.908532 sshd[4173]: Accepted publickey for core from 172.24.4.1 port 37436 ssh2: RSA SHA256:KJPMTbzVpA/z4Q0YiJhVuiYABmBXzBBZVku5cZVzxpg May 10 10:50:20.911389 sshd-session[4173]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 10:50:20.926586 systemd-logind[1498]: New session 18 of user core. May 10 10:50:20.931052 systemd[1]: Started session-18.scope - Session 18 of User core. 
May 10 10:50:21.933355 sshd[4175]: Connection closed by 172.24.4.1 port 37436 May 10 10:50:21.936214 sshd-session[4173]: pam_unix(sshd:session): session closed for user core May 10 10:50:21.944059 systemd-logind[1498]: Session 18 logged out. Waiting for processes to exit. May 10 10:50:21.945389 systemd[1]: sshd@15-172.24.4.188:22-172.24.4.1:37436.service: Deactivated successfully. May 10 10:50:21.950626 systemd[1]: session-18.scope: Deactivated successfully. May 10 10:50:21.957204 systemd-logind[1498]: Removed session 18. May 10 10:50:26.952043 systemd[1]: Started sshd@16-172.24.4.188:22-172.24.4.1:35172.service - OpenSSH per-connection server daemon (172.24.4.1:35172). May 10 10:50:28.422784 sshd[4188]: Accepted publickey for core from 172.24.4.1 port 35172 ssh2: RSA SHA256:KJPMTbzVpA/z4Q0YiJhVuiYABmBXzBBZVku5cZVzxpg May 10 10:50:28.429887 sshd-session[4188]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 10:50:28.447822 systemd-logind[1498]: New session 19 of user core. May 10 10:50:28.455062 systemd[1]: Started session-19.scope - Session 19 of User core. May 10 10:50:29.397419 sshd[4190]: Connection closed by 172.24.4.1 port 35172 May 10 10:50:29.398153 sshd-session[4188]: pam_unix(sshd:session): session closed for user core May 10 10:50:29.410773 systemd[1]: sshd@16-172.24.4.188:22-172.24.4.1:35172.service: Deactivated successfully. May 10 10:50:29.414720 systemd[1]: session-19.scope: Deactivated successfully. May 10 10:50:29.418630 systemd-logind[1498]: Session 19 logged out. Waiting for processes to exit. May 10 10:50:29.420966 systemd[1]: Started sshd@17-172.24.4.188:22-172.24.4.1:35176.service - OpenSSH per-connection server daemon (172.24.4.1:35176). May 10 10:50:29.425281 systemd-logind[1498]: Removed session 19. May 10 10:50:30.833416 containerd[1527]: time="2025-05-10T10:50:30.833218073Z" level=warning msg="container event discarded" container=34578985d17308cbfe47aa663ce7aed86a018217acfecfab7d1fd145a39ae282 type=CONTAINER_CREATED_EVENT May 10 10:50:30.833416 containerd[1527]: time="2025-05-10T10:50:30.833347947Z" level=warning msg="container event discarded" container=34578985d17308cbfe47aa663ce7aed86a018217acfecfab7d1fd145a39ae282 type=CONTAINER_STARTED_EVENT May 10 10:50:30.847132 containerd[1527]: time="2025-05-10T10:50:30.846988958Z" level=warning msg="container event discarded" container=49a49212e6425194b17dff58cc90d5f50aaef1ad586febaf136bb221f4cc18e2 type=CONTAINER_CREATED_EVENT May 10 10:50:30.847132 containerd[1527]: time="2025-05-10T10:50:30.847077364Z" level=warning msg="container event discarded" container=49a49212e6425194b17dff58cc90d5f50aaef1ad586febaf136bb221f4cc18e2 type=CONTAINER_STARTED_EVENT May 10 10:50:30.889069 containerd[1527]: time="2025-05-10T10:50:30.888941323Z" level=warning msg="container event discarded" container=dccf7dfdbd9877fa297b47539852c605a1c51058b15a99010dde3737e1f6fa7d type=CONTAINER_CREATED_EVENT May 10 10:50:30.990084 sshd[4200]: Accepted publickey for core from 172.24.4.1 port 35176 ssh2: RSA SHA256:KJPMTbzVpA/z4Q0YiJhVuiYABmBXzBBZVku5cZVzxpg May 10 10:50:30.995583 sshd-session[4200]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 10:50:31.009845 systemd-logind[1498]: New session 20 of user core. May 10 10:50:31.020021 systemd[1]: Started session-20.scope - Session 20 of User core. 
May 10 10:50:31.050531 containerd[1527]: time="2025-05-10T10:50:31.050406141Z" level=warning msg="container event discarded" container=dccf7dfdbd9877fa297b47539852c605a1c51058b15a99010dde3737e1f6fa7d type=CONTAINER_STARTED_EVENT May 10 10:50:31.101454 containerd[1527]: time="2025-05-10T10:50:31.101152264Z" level=warning msg="container event discarded" container=ed7fba119ac00bc0b25c560a2e380c59a4dc2194369c30e9bb0d9719082e8a78 type=CONTAINER_CREATED_EVENT May 10 10:50:31.101454 containerd[1527]: time="2025-05-10T10:50:31.101234570Z" level=warning msg="container event discarded" container=ed7fba119ac00bc0b25c560a2e380c59a4dc2194369c30e9bb0d9719082e8a78 type=CONTAINER_STARTED_EVENT May 10 10:50:31.934782 sshd[4203]: Connection closed by 172.24.4.1 port 35176 May 10 10:50:31.936019 sshd-session[4200]: pam_unix(sshd:session): session closed for user core May 10 10:50:31.953225 systemd[1]: sshd@17-172.24.4.188:22-172.24.4.1:35176.service: Deactivated successfully. May 10 10:50:31.960193 systemd[1]: session-20.scope: Deactivated successfully. May 10 10:50:31.963964 systemd-logind[1498]: Session 20 logged out. Waiting for processes to exit. May 10 10:50:31.971174 systemd[1]: Started sshd@18-172.24.4.188:22-172.24.4.1:35184.service - OpenSSH per-connection server daemon (172.24.4.1:35184). May 10 10:50:31.975336 systemd-logind[1498]: Removed session 20. May 10 10:50:33.485496 sshd[4214]: Accepted publickey for core from 172.24.4.1 port 35184 ssh2: RSA SHA256:KJPMTbzVpA/z4Q0YiJhVuiYABmBXzBBZVku5cZVzxpg May 10 10:50:33.489820 sshd-session[4214]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 10:50:33.502754 systemd-logind[1498]: New session 21 of user core. May 10 10:50:33.512155 systemd[1]: Started session-21.scope - Session 21 of User core. May 10 10:50:35.479602 sshd[4217]: Connection closed by 172.24.4.1 port 35184 May 10 10:50:35.479802 sshd-session[4214]: pam_unix(sshd:session): session closed for user core May 10 10:50:35.500391 systemd[1]: sshd@18-172.24.4.188:22-172.24.4.1:35184.service: Deactivated successfully. May 10 10:50:35.506093 systemd[1]: session-21.scope: Deactivated successfully. May 10 10:50:35.510274 systemd-logind[1498]: Session 21 logged out. Waiting for processes to exit. May 10 10:50:35.517300 systemd[1]: Started sshd@19-172.24.4.188:22-172.24.4.1:42398.service - OpenSSH per-connection server daemon (172.24.4.1:42398). May 10 10:50:35.525979 systemd-logind[1498]: Removed session 21. May 10 10:50:36.973728 sshd[4234]: Accepted publickey for core from 172.24.4.1 port 42398 ssh2: RSA SHA256:KJPMTbzVpA/z4Q0YiJhVuiYABmBXzBBZVku5cZVzxpg May 10 10:50:36.977083 sshd-session[4234]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 10:50:36.992530 systemd-logind[1498]: New session 22 of user core. May 10 10:50:37.013386 systemd[1]: Started session-22.scope - Session 22 of User core. May 10 10:50:37.936631 sshd[4237]: Connection closed by 172.24.4.1 port 42398 May 10 10:50:37.938096 sshd-session[4234]: pam_unix(sshd:session): session closed for user core May 10 10:50:37.956417 systemd[1]: sshd@19-172.24.4.188:22-172.24.4.1:42398.service: Deactivated successfully. May 10 10:50:37.964073 systemd[1]: session-22.scope: Deactivated successfully. May 10 10:50:37.969737 systemd-logind[1498]: Session 22 logged out. Waiting for processes to exit. May 10 10:50:37.973433 systemd[1]: Started sshd@20-172.24.4.188:22-172.24.4.1:42408.service - OpenSSH per-connection server daemon (172.24.4.1:42408). 
May 10 10:50:37.977209 systemd-logind[1498]: Removed session 22. May 10 10:50:39.221095 sshd[4246]: Accepted publickey for core from 172.24.4.1 port 42408 ssh2: RSA SHA256:KJPMTbzVpA/z4Q0YiJhVuiYABmBXzBBZVku5cZVzxpg May 10 10:50:39.224258 sshd-session[4246]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 10:50:39.238775 systemd-logind[1498]: New session 23 of user core. May 10 10:50:39.249040 systemd[1]: Started session-23.scope - Session 23 of User core. May 10 10:50:39.936426 sshd[4249]: Connection closed by 172.24.4.1 port 42408 May 10 10:50:39.937774 sshd-session[4246]: pam_unix(sshd:session): session closed for user core May 10 10:50:39.946610 systemd[1]: sshd@20-172.24.4.188:22-172.24.4.1:42408.service: Deactivated successfully. May 10 10:50:39.951257 systemd[1]: session-23.scope: Deactivated successfully. May 10 10:50:39.956296 systemd-logind[1498]: Session 23 logged out. Waiting for processes to exit. May 10 10:50:39.960124 systemd-logind[1498]: Removed session 23. May 10 10:50:42.047298 containerd[1527]: time="2025-05-10T10:50:42.047116459Z" level=warning msg="container event discarded" container=862ddaea881d2cfae7600c014cc6eeb395f3f87b375aec4e5cf9c3b8685b1111 type=CONTAINER_CREATED_EVENT May 10 10:50:42.174348 containerd[1527]: time="2025-05-10T10:50:42.174194572Z" level=warning msg="container event discarded" container=862ddaea881d2cfae7600c014cc6eeb395f3f87b375aec4e5cf9c3b8685b1111 type=CONTAINER_STARTED_EVENT May 10 10:50:43.488998 containerd[1527]: time="2025-05-10T10:50:43.488880882Z" level=warning msg="container event discarded" container=862ddaea881d2cfae7600c014cc6eeb395f3f87b375aec4e5cf9c3b8685b1111 type=CONTAINER_STOPPED_EVENT May 10 10:50:44.275785 containerd[1527]: time="2025-05-10T10:50:44.275590283Z" level=warning msg="container event discarded" container=4662abd002d0cce9b35c088163a204fe5768252faff5abd6d424893b04efc586 type=CONTAINER_CREATED_EVENT May 10 10:50:44.361065 containerd[1527]: time="2025-05-10T10:50:44.360804362Z" level=warning msg="container event discarded" container=4662abd002d0cce9b35c088163a204fe5768252faff5abd6d424893b04efc586 type=CONTAINER_STARTED_EVENT May 10 10:50:44.451469 containerd[1527]: time="2025-05-10T10:50:44.451280525Z" level=warning msg="container event discarded" container=4662abd002d0cce9b35c088163a204fe5768252faff5abd6d424893b04efc586 type=CONTAINER_STOPPED_EVENT May 10 10:50:44.967125 systemd[1]: Started sshd@21-172.24.4.188:22-172.24.4.1:54504.service - OpenSSH per-connection server daemon (172.24.4.1:54504). 
May 10 10:50:45.316322 containerd[1527]: time="2025-05-10T10:50:45.316019764Z" level=warning msg="container event discarded" container=dddd1427925d22f623670bdf22f3fb8ec733e146cc31b926bc5f3978cf6f7aa2 type=CONTAINER_CREATED_EVENT May 10 10:50:45.421097 containerd[1527]: time="2025-05-10T10:50:45.420894383Z" level=warning msg="container event discarded" container=dddd1427925d22f623670bdf22f3fb8ec733e146cc31b926bc5f3978cf6f7aa2 type=CONTAINER_STARTED_EVENT May 10 10:50:45.534399 containerd[1527]: time="2025-05-10T10:50:45.534257096Z" level=warning msg="container event discarded" container=dddd1427925d22f623670bdf22f3fb8ec733e146cc31b926bc5f3978cf6f7aa2 type=CONTAINER_STOPPED_EVENT May 10 10:50:46.211855 containerd[1527]: time="2025-05-10T10:50:46.211657303Z" level=warning msg="container event discarded" container=4d75b14cef3c14a0daab1e90047ff7553ddabd5d279a28be8f9d3ad9b4c3b982 type=CONTAINER_CREATED_EVENT May 10 10:50:46.278722 containerd[1527]: time="2025-05-10T10:50:46.278520273Z" level=warning msg="container event discarded" container=7c9ad28eb11e80e3f7833ea33759895ef2a3a5bae0cb985c7129d898ee204d7f type=CONTAINER_CREATED_EVENT May 10 10:50:46.285749 sshd[4263]: Accepted publickey for core from 172.24.4.1 port 54504 ssh2: RSA SHA256:KJPMTbzVpA/z4Q0YiJhVuiYABmBXzBBZVku5cZVzxpg May 10 10:50:46.289104 sshd-session[4263]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 10:50:46.302376 systemd-logind[1498]: New session 24 of user core. May 10 10:50:46.311035 systemd[1]: Started session-24.scope - Session 24 of User core. May 10 10:50:46.332959 containerd[1527]: time="2025-05-10T10:50:46.332832355Z" level=warning msg="container event discarded" container=4d75b14cef3c14a0daab1e90047ff7553ddabd5d279a28be8f9d3ad9b4c3b982 type=CONTAINER_STARTED_EVENT May 10 10:50:46.368443 containerd[1527]: time="2025-05-10T10:50:46.368323785Z" level=warning msg="container event discarded" container=7c9ad28eb11e80e3f7833ea33759895ef2a3a5bae0cb985c7129d898ee204d7f type=CONTAINER_STARTED_EVENT May 10 10:50:46.679410 containerd[1527]: time="2025-05-10T10:50:46.679288478Z" level=warning msg="container event discarded" container=7c9ad28eb11e80e3f7833ea33759895ef2a3a5bae0cb985c7129d898ee204d7f type=CONTAINER_STOPPED_EVENT May 10 10:50:47.011414 sshd[4265]: Connection closed by 172.24.4.1 port 54504 May 10 10:50:47.011069 sshd-session[4263]: pam_unix(sshd:session): session closed for user core May 10 10:50:47.017678 systemd[1]: sshd@21-172.24.4.188:22-172.24.4.1:54504.service: Deactivated successfully. May 10 10:50:47.024060 systemd[1]: session-24.scope: Deactivated successfully. May 10 10:50:47.028459 systemd-logind[1498]: Session 24 logged out. Waiting for processes to exit. May 10 10:50:47.031253 systemd-logind[1498]: Removed session 24. May 10 10:50:47.360486 containerd[1527]: time="2025-05-10T10:50:47.360185402Z" level=warning msg="container event discarded" container=98ed2eb55a347dac1733044a93724a92c25a236224883ed4cc847f7e65e45253 type=CONTAINER_CREATED_EVENT May 10 10:50:47.588883 containerd[1527]: time="2025-05-10T10:50:47.588683349Z" level=warning msg="container event discarded" container=98ed2eb55a347dac1733044a93724a92c25a236224883ed4cc847f7e65e45253 type=CONTAINER_STARTED_EVENT May 10 10:50:52.034999 systemd[1]: Started sshd@22-172.24.4.188:22-172.24.4.1:54508.service - OpenSSH per-connection server daemon (172.24.4.1:54508). 
May 10 10:50:53.326821 sshd[4279]: Accepted publickey for core from 172.24.4.1 port 54508 ssh2: RSA SHA256:KJPMTbzVpA/z4Q0YiJhVuiYABmBXzBBZVku5cZVzxpg May 10 10:50:53.330377 sshd-session[4279]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 10:50:53.343061 systemd-logind[1498]: New session 25 of user core. May 10 10:50:53.359094 systemd[1]: Started session-25.scope - Session 25 of User core. May 10 10:50:54.207300 sshd[4281]: Connection closed by 172.24.4.1 port 54508 May 10 10:50:54.208652 sshd-session[4279]: pam_unix(sshd:session): session closed for user core May 10 10:50:54.219310 systemd[1]: sshd@22-172.24.4.188:22-172.24.4.1:54508.service: Deactivated successfully. May 10 10:50:54.224534 systemd[1]: session-25.scope: Deactivated successfully. May 10 10:50:54.227017 systemd-logind[1498]: Session 25 logged out. Waiting for processes to exit. May 10 10:50:54.230085 systemd-logind[1498]: Removed session 25. May 10 10:50:57.534472 containerd[1527]: time="2025-05-10T10:50:57.534318248Z" level=warning msg="container event discarded" container=fdb999b825140d67347acfb31117c00dff162148de9ad6901884644270f2c7d5 type=CONTAINER_CREATED_EVENT May 10 10:50:57.534472 containerd[1527]: time="2025-05-10T10:50:57.534440558Z" level=warning msg="container event discarded" container=fdb999b825140d67347acfb31117c00dff162148de9ad6901884644270f2c7d5 type=CONTAINER_STARTED_EVENT May 10 10:50:57.563943 containerd[1527]: time="2025-05-10T10:50:57.563777052Z" level=warning msg="container event discarded" container=5f636453b3e85adf7cada1217b9d76dfb8ed7fa5b74cfdb607b74a6bb4999fdb type=CONTAINER_CREATED_EVENT May 10 10:50:57.591469 containerd[1527]: time="2025-05-10T10:50:57.591299273Z" level=warning msg="container event discarded" container=c84540bf1ba66478ea9f01e7358d64cae423d5c4ee12c0f21f39052f7cd93cc2 type=CONTAINER_CREATED_EVENT May 10 10:50:57.591469 containerd[1527]: time="2025-05-10T10:50:57.591386507Z" level=warning msg="container event discarded" container=c84540bf1ba66478ea9f01e7358d64cae423d5c4ee12c0f21f39052f7cd93cc2 type=CONTAINER_STARTED_EVENT May 10 10:50:57.626151 containerd[1527]: time="2025-05-10T10:50:57.625917639Z" level=warning msg="container event discarded" container=6d1cc6307a7ea8378e64130a41be560f1ef41121806dea5640c6cdfdee243679 type=CONTAINER_CREATED_EVENT May 10 10:50:57.651842 containerd[1527]: time="2025-05-10T10:50:57.651670870Z" level=warning msg="container event discarded" container=5f636453b3e85adf7cada1217b9d76dfb8ed7fa5b74cfdb607b74a6bb4999fdb type=CONTAINER_STARTED_EVENT May 10 10:50:57.716535 containerd[1527]: time="2025-05-10T10:50:57.716345030Z" level=warning msg="container event discarded" container=6d1cc6307a7ea8378e64130a41be560f1ef41121806dea5640c6cdfdee243679 type=CONTAINER_STARTED_EVENT May 10 10:50:59.230808 systemd[1]: Started sshd@23-172.24.4.188:22-172.24.4.1:33244.service - OpenSSH per-connection server daemon (172.24.4.1:33244). May 10 10:51:00.347268 sshd[4293]: Accepted publickey for core from 172.24.4.1 port 33244 ssh2: RSA SHA256:KJPMTbzVpA/z4Q0YiJhVuiYABmBXzBBZVku5cZVzxpg May 10 10:51:00.350955 sshd-session[4293]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 10:51:00.364777 systemd-logind[1498]: New session 26 of user core. May 10 10:51:00.373158 systemd[1]: Started session-26.scope - Session 26 of User core. 
May 10 10:51:01.083421 sshd[4295]: Connection closed by 172.24.4.1 port 33244 May 10 10:51:01.086662 sshd-session[4293]: pam_unix(sshd:session): session closed for user core May 10 10:51:01.103882 systemd[1]: sshd@23-172.24.4.188:22-172.24.4.1:33244.service: Deactivated successfully. May 10 10:51:01.110035 systemd[1]: session-26.scope: Deactivated successfully. May 10 10:51:01.113833 systemd-logind[1498]: Session 26 logged out. Waiting for processes to exit. May 10 10:51:01.120133 systemd[1]: Started sshd@24-172.24.4.188:22-172.24.4.1:33250.service - OpenSSH per-connection server daemon (172.24.4.1:33250). May 10 10:51:01.124027 systemd-logind[1498]: Removed session 26. May 10 10:51:02.401483 sshd[4306]: Accepted publickey for core from 172.24.4.1 port 33250 ssh2: RSA SHA256:KJPMTbzVpA/z4Q0YiJhVuiYABmBXzBBZVku5cZVzxpg May 10 10:51:02.404923 sshd-session[4306]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 10:51:02.419272 systemd-logind[1498]: New session 27 of user core. May 10 10:51:02.430128 systemd[1]: Started session-27.scope - Session 27 of User core. May 10 10:51:05.364372 containerd[1527]: time="2025-05-10T10:51:05.363911214Z" level=info msg="StopContainer for \"4d75b14cef3c14a0daab1e90047ff7553ddabd5d279a28be8f9d3ad9b4c3b982\" with timeout 30 (s)" May 10 10:51:05.366714 containerd[1527]: time="2025-05-10T10:51:05.366658439Z" level=info msg="Stop container \"4d75b14cef3c14a0daab1e90047ff7553ddabd5d279a28be8f9d3ad9b4c3b982\" with signal terminated" May 10 10:51:05.387215 systemd[1]: cri-containerd-4d75b14cef3c14a0daab1e90047ff7553ddabd5d279a28be8f9d3ad9b4c3b982.scope: Deactivated successfully. May 10 10:51:05.387933 systemd[1]: cri-containerd-4d75b14cef3c14a0daab1e90047ff7553ddabd5d279a28be8f9d3ad9b4c3b982.scope: Consumed 1.211s CPU time, 26.9M memory peak, 4K written to disk. 
May 10 10:51:05.393367 containerd[1527]: time="2025-05-10T10:51:05.393323974Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4d75b14cef3c14a0daab1e90047ff7553ddabd5d279a28be8f9d3ad9b4c3b982\" id:\"4d75b14cef3c14a0daab1e90047ff7553ddabd5d279a28be8f9d3ad9b4c3b982\" pid:3321 exited_at:{seconds:1746874265 nanos:392346219}" May 10 10:51:05.393839 containerd[1527]: time="2025-05-10T10:51:05.393452415Z" level=info msg="received exit event container_id:\"4d75b14cef3c14a0daab1e90047ff7553ddabd5d279a28be8f9d3ad9b4c3b982\" id:\"4d75b14cef3c14a0daab1e90047ff7553ddabd5d279a28be8f9d3ad9b4c3b982\" pid:3321 exited_at:{seconds:1746874265 nanos:392346219}" May 10 10:51:05.406095 containerd[1527]: time="2025-05-10T10:51:05.405959072Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 10 10:51:05.412113 containerd[1527]: time="2025-05-10T10:51:05.411947799Z" level=info msg="TaskExit event in podsandbox handler container_id:\"98ed2eb55a347dac1733044a93724a92c25a236224883ed4cc847f7e65e45253\" id:\"8c6061ab9127288068a10f5774a8af13c20162597da59eaabcb6d7d898495276\" pid:4336 exited_at:{seconds:1746874265 nanos:410554585}" May 10 10:51:05.415798 containerd[1527]: time="2025-05-10T10:51:05.415603339Z" level=info msg="StopContainer for \"98ed2eb55a347dac1733044a93724a92c25a236224883ed4cc847f7e65e45253\" with timeout 2 (s)" May 10 10:51:05.416052 containerd[1527]: time="2025-05-10T10:51:05.416028286Z" level=info msg="Stop container \"98ed2eb55a347dac1733044a93724a92c25a236224883ed4cc847f7e65e45253\" with signal terminated" May 10 10:51:05.432050 systemd-networkd[1420]: lxc_health: Link DOWN May 10 10:51:05.432058 systemd-networkd[1420]: lxc_health: Lost carrier May 10 10:51:05.455796 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4d75b14cef3c14a0daab1e90047ff7553ddabd5d279a28be8f9d3ad9b4c3b982-rootfs.mount: Deactivated successfully. May 10 10:51:05.463968 systemd[1]: cri-containerd-98ed2eb55a347dac1733044a93724a92c25a236224883ed4cc847f7e65e45253.scope: Deactivated successfully. May 10 10:51:05.464288 systemd[1]: cri-containerd-98ed2eb55a347dac1733044a93724a92c25a236224883ed4cc847f7e65e45253.scope: Consumed 11.708s CPU time, 124.5M memory peak, 136K read from disk, 13.3M written to disk. 
May 10 10:51:05.474751 containerd[1527]: time="2025-05-10T10:51:05.474655443Z" level=info msg="received exit event container_id:\"98ed2eb55a347dac1733044a93724a92c25a236224883ed4cc847f7e65e45253\" id:\"98ed2eb55a347dac1733044a93724a92c25a236224883ed4cc847f7e65e45253\" pid:3389 exited_at:{seconds:1746874265 nanos:473502760}" May 10 10:51:05.475817 containerd[1527]: time="2025-05-10T10:51:05.475774713Z" level=info msg="TaskExit event in podsandbox handler container_id:\"98ed2eb55a347dac1733044a93724a92c25a236224883ed4cc847f7e65e45253\" id:\"98ed2eb55a347dac1733044a93724a92c25a236224883ed4cc847f7e65e45253\" pid:3389 exited_at:{seconds:1746874265 nanos:473502760}" May 10 10:51:05.495935 containerd[1527]: time="2025-05-10T10:51:05.495845093Z" level=info msg="StopContainer for \"4d75b14cef3c14a0daab1e90047ff7553ddabd5d279a28be8f9d3ad9b4c3b982\" returns successfully" May 10 10:51:05.497726 containerd[1527]: time="2025-05-10T10:51:05.496772984Z" level=info msg="StopPodSandbox for \"ed7fba119ac00bc0b25c560a2e380c59a4dc2194369c30e9bb0d9719082e8a78\"" May 10 10:51:05.497726 containerd[1527]: time="2025-05-10T10:51:05.496860609Z" level=info msg="Container to stop \"4d75b14cef3c14a0daab1e90047ff7553ddabd5d279a28be8f9d3ad9b4c3b982\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 10 10:51:05.508539 systemd[1]: cri-containerd-ed7fba119ac00bc0b25c560a2e380c59a4dc2194369c30e9bb0d9719082e8a78.scope: Deactivated successfully. May 10 10:51:05.511484 containerd[1527]: time="2025-05-10T10:51:05.511358993Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ed7fba119ac00bc0b25c560a2e380c59a4dc2194369c30e9bb0d9719082e8a78\" id:\"ed7fba119ac00bc0b25c560a2e380c59a4dc2194369c30e9bb0d9719082e8a78\" pid:2991 exit_status:137 exited_at:{seconds:1746874265 nanos:510750011}" May 10 10:51:05.522645 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-98ed2eb55a347dac1733044a93724a92c25a236224883ed4cc847f7e65e45253-rootfs.mount: Deactivated successfully. 
May 10 10:51:05.550243 containerd[1527]: time="2025-05-10T10:51:05.550192350Z" level=info msg="StopContainer for \"98ed2eb55a347dac1733044a93724a92c25a236224883ed4cc847f7e65e45253\" returns successfully" May 10 10:51:05.555011 containerd[1527]: time="2025-05-10T10:51:05.554963312Z" level=info msg="StopPodSandbox for \"34578985d17308cbfe47aa663ce7aed86a018217acfecfab7d1fd145a39ae282\"" May 10 10:51:05.555114 containerd[1527]: time="2025-05-10T10:51:05.555056246Z" level=info msg="Container to stop \"4662abd002d0cce9b35c088163a204fe5768252faff5abd6d424893b04efc586\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 10 10:51:05.555114 containerd[1527]: time="2025-05-10T10:51:05.555089178Z" level=info msg="Container to stop \"dddd1427925d22f623670bdf22f3fb8ec733e146cc31b926bc5f3978cf6f7aa2\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 10 10:51:05.555114 containerd[1527]: time="2025-05-10T10:51:05.555105529Z" level=info msg="Container to stop \"98ed2eb55a347dac1733044a93724a92c25a236224883ed4cc847f7e65e45253\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 10 10:51:05.555480 containerd[1527]: time="2025-05-10T10:51:05.555117361Z" level=info msg="Container to stop \"862ddaea881d2cfae7600c014cc6eeb395f3f87b375aec4e5cf9c3b8685b1111\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 10 10:51:05.555480 containerd[1527]: time="2025-05-10T10:51:05.555132790Z" level=info msg="Container to stop \"7c9ad28eb11e80e3f7833ea33759895ef2a3a5bae0cb985c7129d898ee204d7f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 10 10:51:05.571190 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ed7fba119ac00bc0b25c560a2e380c59a4dc2194369c30e9bb0d9719082e8a78-rootfs.mount: Deactivated successfully. May 10 10:51:05.574551 systemd[1]: cri-containerd-34578985d17308cbfe47aa663ce7aed86a018217acfecfab7d1fd145a39ae282.scope: Deactivated successfully. May 10 10:51:05.586765 containerd[1527]: time="2025-05-10T10:51:05.586664045Z" level=info msg="shim disconnected" id=ed7fba119ac00bc0b25c560a2e380c59a4dc2194369c30e9bb0d9719082e8a78 namespace=k8s.io May 10 10:51:05.587782 containerd[1527]: time="2025-05-10T10:51:05.587559185Z" level=warning msg="cleaning up after shim disconnected" id=ed7fba119ac00bc0b25c560a2e380c59a4dc2194369c30e9bb0d9719082e8a78 namespace=k8s.io May 10 10:51:05.587939 containerd[1527]: time="2025-05-10T10:51:05.587778446Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 10 10:51:05.606124 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-34578985d17308cbfe47aa663ce7aed86a018217acfecfab7d1fd145a39ae282-rootfs.mount: Deactivated successfully. 
May 10 10:51:05.618905 containerd[1527]: time="2025-05-10T10:51:05.617907520Z" level=info msg="TaskExit event in podsandbox handler container_id:\"34578985d17308cbfe47aa663ce7aed86a018217acfecfab7d1fd145a39ae282\" id:\"34578985d17308cbfe47aa663ce7aed86a018217acfecfab7d1fd145a39ae282\" pid:2898 exit_status:137 exited_at:{seconds:1746874265 nanos:575228718}" May 10 10:51:05.618905 containerd[1527]: time="2025-05-10T10:51:05.618889482Z" level=info msg="received exit event sandbox_id:\"ed7fba119ac00bc0b25c560a2e380c59a4dc2194369c30e9bb0d9719082e8a78\" exit_status:137 exited_at:{seconds:1746874265 nanos:510750011}" May 10 10:51:05.619275 containerd[1527]: time="2025-05-10T10:51:05.619244869Z" level=info msg="TearDown network for sandbox \"ed7fba119ac00bc0b25c560a2e380c59a4dc2194369c30e9bb0d9719082e8a78\" successfully" May 10 10:51:05.619275 containerd[1527]: time="2025-05-10T10:51:05.619270407Z" level=info msg="StopPodSandbox for \"ed7fba119ac00bc0b25c560a2e380c59a4dc2194369c30e9bb0d9719082e8a78\" returns successfully" May 10 10:51:05.622368 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ed7fba119ac00bc0b25c560a2e380c59a4dc2194369c30e9bb0d9719082e8a78-shm.mount: Deactivated successfully. May 10 10:51:05.626018 containerd[1527]: time="2025-05-10T10:51:05.625610063Z" level=info msg="received exit event sandbox_id:\"34578985d17308cbfe47aa663ce7aed86a018217acfecfab7d1fd145a39ae282\" exit_status:137 exited_at:{seconds:1746874265 nanos:575228718}" May 10 10:51:05.628387 containerd[1527]: time="2025-05-10T10:51:05.628350234Z" level=info msg="shim disconnected" id=34578985d17308cbfe47aa663ce7aed86a018217acfecfab7d1fd145a39ae282 namespace=k8s.io May 10 10:51:05.628387 containerd[1527]: time="2025-05-10T10:51:05.628383216Z" level=warning msg="cleaning up after shim disconnected" id=34578985d17308cbfe47aa663ce7aed86a018217acfecfab7d1fd145a39ae282 namespace=k8s.io May 10 10:51:05.628485 containerd[1527]: time="2025-05-10T10:51:05.628393766Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 10 10:51:05.634161 containerd[1527]: time="2025-05-10T10:51:05.634104471Z" level=info msg="TearDown network for sandbox \"34578985d17308cbfe47aa663ce7aed86a018217acfecfab7d1fd145a39ae282\" successfully" May 10 10:51:05.634161 containerd[1527]: time="2025-05-10T10:51:05.634134968Z" level=info msg="StopPodSandbox for \"34578985d17308cbfe47aa663ce7aed86a018217acfecfab7d1fd145a39ae282\" returns successfully" May 10 10:51:05.733358 kubelet[2760]: I0510 10:51:05.733207 2760 scope.go:117] "RemoveContainer" containerID="4d75b14cef3c14a0daab1e90047ff7553ddabd5d279a28be8f9d3ad9b4c3b982" May 10 10:51:05.740016 kubelet[2760]: I0510 10:51:05.738597 2760 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/58b6bae0-f13b-41ea-adc6-b527c074bb1a-etc-cni-netd\") pod \"58b6bae0-f13b-41ea-adc6-b527c074bb1a\" (UID: \"58b6bae0-f13b-41ea-adc6-b527c074bb1a\") " May 10 10:51:05.740016 kubelet[2760]: I0510 10:51:05.738997 2760 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/58b6bae0-f13b-41ea-adc6-b527c074bb1a-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "58b6bae0-f13b-41ea-adc6-b527c074bb1a" (UID: "58b6bae0-f13b-41ea-adc6-b527c074bb1a"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 10 10:51:05.740016 kubelet[2760]: I0510 10:51:05.739061 2760 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/58b6bae0-f13b-41ea-adc6-b527c074bb1a-cni-path\") pod \"58b6bae0-f13b-41ea-adc6-b527c074bb1a\" (UID: \"58b6bae0-f13b-41ea-adc6-b527c074bb1a\") " May 10 10:51:05.740016 kubelet[2760]: I0510 10:51:05.739178 2760 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fc5f0b2e-fa7c-4dac-afac-3bf87e9a9c0f-cilium-config-path\") pod \"fc5f0b2e-fa7c-4dac-afac-3bf87e9a9c0f\" (UID: \"fc5f0b2e-fa7c-4dac-afac-3bf87e9a9c0f\") " May 10 10:51:05.740016 kubelet[2760]: I0510 10:51:05.739237 2760 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/58b6bae0-f13b-41ea-adc6-b527c074bb1a-xtables-lock\") pod \"58b6bae0-f13b-41ea-adc6-b527c074bb1a\" (UID: \"58b6bae0-f13b-41ea-adc6-b527c074bb1a\") " May 10 10:51:05.740016 kubelet[2760]: I0510 10:51:05.739294 2760 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/58b6bae0-f13b-41ea-adc6-b527c074bb1a-cilium-run\") pod \"58b6bae0-f13b-41ea-adc6-b527c074bb1a\" (UID: \"58b6bae0-f13b-41ea-adc6-b527c074bb1a\") " May 10 10:51:05.743219 kubelet[2760]: I0510 10:51:05.739340 2760 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/58b6bae0-f13b-41ea-adc6-b527c074bb1a-hostproc\") pod \"58b6bae0-f13b-41ea-adc6-b527c074bb1a\" (UID: \"58b6bae0-f13b-41ea-adc6-b527c074bb1a\") " May 10 10:51:05.743219 kubelet[2760]: I0510 10:51:05.739398 2760 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/58b6bae0-f13b-41ea-adc6-b527c074bb1a-host-proc-sys-net\") pod \"58b6bae0-f13b-41ea-adc6-b527c074bb1a\" (UID: \"58b6bae0-f13b-41ea-adc6-b527c074bb1a\") " May 10 10:51:05.743219 kubelet[2760]: I0510 10:51:05.739493 2760 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/58b6bae0-f13b-41ea-adc6-b527c074bb1a-bpf-maps\") pod \"58b6bae0-f13b-41ea-adc6-b527c074bb1a\" (UID: \"58b6bae0-f13b-41ea-adc6-b527c074bb1a\") " May 10 10:51:05.743219 kubelet[2760]: I0510 10:51:05.739539 2760 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/58b6bae0-f13b-41ea-adc6-b527c074bb1a-lib-modules\") pod \"58b6bae0-f13b-41ea-adc6-b527c074bb1a\" (UID: \"58b6bae0-f13b-41ea-adc6-b527c074bb1a\") " May 10 10:51:05.743219 kubelet[2760]: I0510 10:51:05.739593 2760 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/58b6bae0-f13b-41ea-adc6-b527c074bb1a-cilium-cgroup\") pod \"58b6bae0-f13b-41ea-adc6-b527c074bb1a\" (UID: \"58b6bae0-f13b-41ea-adc6-b527c074bb1a\") " May 10 10:51:05.743219 kubelet[2760]: I0510 10:51:05.739641 2760 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/58b6bae0-f13b-41ea-adc6-b527c074bb1a-cilium-config-path\") pod \"58b6bae0-f13b-41ea-adc6-b527c074bb1a\" (UID: \"58b6bae0-f13b-41ea-adc6-b527c074bb1a\") " May 10 10:51:05.745224 
kubelet[2760]: I0510 10:51:05.742222 2760 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vggwj\" (UniqueName: \"kubernetes.io/projected/58b6bae0-f13b-41ea-adc6-b527c074bb1a-kube-api-access-vggwj\") pod \"58b6bae0-f13b-41ea-adc6-b527c074bb1a\" (UID: \"58b6bae0-f13b-41ea-adc6-b527c074bb1a\") " May 10 10:51:05.745224 kubelet[2760]: I0510 10:51:05.742585 2760 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/58b6bae0-f13b-41ea-adc6-b527c074bb1a-host-proc-sys-kernel\") pod \"58b6bae0-f13b-41ea-adc6-b527c074bb1a\" (UID: \"58b6bae0-f13b-41ea-adc6-b527c074bb1a\") " May 10 10:51:05.745224 kubelet[2760]: I0510 10:51:05.742649 2760 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5pd7q\" (UniqueName: \"kubernetes.io/projected/fc5f0b2e-fa7c-4dac-afac-3bf87e9a9c0f-kube-api-access-5pd7q\") pod \"fc5f0b2e-fa7c-4dac-afac-3bf87e9a9c0f\" (UID: \"fc5f0b2e-fa7c-4dac-afac-3bf87e9a9c0f\") " May 10 10:51:05.745224 kubelet[2760]: I0510 10:51:05.742746 2760 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/58b6bae0-f13b-41ea-adc6-b527c074bb1a-clustermesh-secrets\") pod \"58b6bae0-f13b-41ea-adc6-b527c074bb1a\" (UID: \"58b6bae0-f13b-41ea-adc6-b527c074bb1a\") " May 10 10:51:05.745224 kubelet[2760]: I0510 10:51:05.742796 2760 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/58b6bae0-f13b-41ea-adc6-b527c074bb1a-hubble-tls\") pod \"58b6bae0-f13b-41ea-adc6-b527c074bb1a\" (UID: \"58b6bae0-f13b-41ea-adc6-b527c074bb1a\") " May 10 10:51:05.745224 kubelet[2760]: I0510 10:51:05.742942 2760 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/58b6bae0-f13b-41ea-adc6-b527c074bb1a-etc-cni-netd\") on node \"ci-4330-0-0-n-4c6f505fd4.novalocal\" DevicePath \"\"" May 10 10:51:05.746516 kubelet[2760]: I0510 10:51:05.742797 2760 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/58b6bae0-f13b-41ea-adc6-b527c074bb1a-hostproc" (OuterVolumeSpecName: "hostproc") pod "58b6bae0-f13b-41ea-adc6-b527c074bb1a" (UID: "58b6bae0-f13b-41ea-adc6-b527c074bb1a"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 10 10:51:05.746516 kubelet[2760]: I0510 10:51:05.744130 2760 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/58b6bae0-f13b-41ea-adc6-b527c074bb1a-cni-path" (OuterVolumeSpecName: "cni-path") pod "58b6bae0-f13b-41ea-adc6-b527c074bb1a" (UID: "58b6bae0-f13b-41ea-adc6-b527c074bb1a"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 10 10:51:05.746516 kubelet[2760]: I0510 10:51:05.744569 2760 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/58b6bae0-f13b-41ea-adc6-b527c074bb1a-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "58b6bae0-f13b-41ea-adc6-b527c074bb1a" (UID: "58b6bae0-f13b-41ea-adc6-b527c074bb1a"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 10 10:51:05.746516 kubelet[2760]: I0510 10:51:05.745778 2760 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/58b6bae0-f13b-41ea-adc6-b527c074bb1a-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "58b6bae0-f13b-41ea-adc6-b527c074bb1a" (UID: "58b6bae0-f13b-41ea-adc6-b527c074bb1a"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 10 10:51:05.749219 kubelet[2760]: I0510 10:51:05.748970 2760 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/58b6bae0-f13b-41ea-adc6-b527c074bb1a-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "58b6bae0-f13b-41ea-adc6-b527c074bb1a" (UID: "58b6bae0-f13b-41ea-adc6-b527c074bb1a"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 10 10:51:05.749219 kubelet[2760]: I0510 10:51:05.749121 2760 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/58b6bae0-f13b-41ea-adc6-b527c074bb1a-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "58b6bae0-f13b-41ea-adc6-b527c074bb1a" (UID: "58b6bae0-f13b-41ea-adc6-b527c074bb1a"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 10 10:51:05.749219 kubelet[2760]: I0510 10:51:05.749174 2760 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/58b6bae0-f13b-41ea-adc6-b527c074bb1a-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "58b6bae0-f13b-41ea-adc6-b527c074bb1a" (UID: "58b6bae0-f13b-41ea-adc6-b527c074bb1a"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 10 10:51:05.749996 kubelet[2760]: I0510 10:51:05.749241 2760 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/58b6bae0-f13b-41ea-adc6-b527c074bb1a-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "58b6bae0-f13b-41ea-adc6-b527c074bb1a" (UID: "58b6bae0-f13b-41ea-adc6-b527c074bb1a"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 10 10:51:05.755781 kubelet[2760]: I0510 10:51:05.755406 2760 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/58b6bae0-f13b-41ea-adc6-b527c074bb1a-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "58b6bae0-f13b-41ea-adc6-b527c074bb1a" (UID: "58b6bae0-f13b-41ea-adc6-b527c074bb1a"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 10 10:51:05.756761 kubelet[2760]: I0510 10:51:05.756152 2760 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58b6bae0-f13b-41ea-adc6-b527c074bb1a-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "58b6bae0-f13b-41ea-adc6-b527c074bb1a" (UID: "58b6bae0-f13b-41ea-adc6-b527c074bb1a"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" May 10 10:51:05.758831 containerd[1527]: time="2025-05-10T10:51:05.758536574Z" level=info msg="RemoveContainer for \"4d75b14cef3c14a0daab1e90047ff7553ddabd5d279a28be8f9d3ad9b4c3b982\"" May 10 10:51:05.777079 kubelet[2760]: I0510 10:51:05.776948 2760 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc5f0b2e-fa7c-4dac-afac-3bf87e9a9c0f-kube-api-access-5pd7q" (OuterVolumeSpecName: "kube-api-access-5pd7q") pod "fc5f0b2e-fa7c-4dac-afac-3bf87e9a9c0f" (UID: "fc5f0b2e-fa7c-4dac-afac-3bf87e9a9c0f"). InnerVolumeSpecName "kube-api-access-5pd7q". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 10 10:51:05.779590 kubelet[2760]: I0510 10:51:05.778813 2760 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58b6bae0-f13b-41ea-adc6-b527c074bb1a-kube-api-access-vggwj" (OuterVolumeSpecName: "kube-api-access-vggwj") pod "58b6bae0-f13b-41ea-adc6-b527c074bb1a" (UID: "58b6bae0-f13b-41ea-adc6-b527c074bb1a"). InnerVolumeSpecName "kube-api-access-vggwj". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 10 10:51:05.782604 kubelet[2760]: I0510 10:51:05.782420 2760 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58b6bae0-f13b-41ea-adc6-b527c074bb1a-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "58b6bae0-f13b-41ea-adc6-b527c074bb1a" (UID: "58b6bae0-f13b-41ea-adc6-b527c074bb1a"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" May 10 10:51:05.786408 kubelet[2760]: I0510 10:51:05.786378 2760 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/58b6bae0-f13b-41ea-adc6-b527c074bb1a-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "58b6bae0-f13b-41ea-adc6-b527c074bb1a" (UID: "58b6bae0-f13b-41ea-adc6-b527c074bb1a"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 10 10:51:05.796948 kubelet[2760]: I0510 10:51:05.796896 2760 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc5f0b2e-fa7c-4dac-afac-3bf87e9a9c0f-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "fc5f0b2e-fa7c-4dac-afac-3bf87e9a9c0f" (UID: "fc5f0b2e-fa7c-4dac-afac-3bf87e9a9c0f"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 10 10:51:05.801858 containerd[1527]: time="2025-05-10T10:51:05.801819611Z" level=info msg="RemoveContainer for \"4d75b14cef3c14a0daab1e90047ff7553ddabd5d279a28be8f9d3ad9b4c3b982\" returns successfully" May 10 10:51:05.803763 systemd[1]: Removed slice kubepods-burstable-pod58b6bae0_f13b_41ea_adc6_b527c074bb1a.slice - libcontainer container kubepods-burstable-pod58b6bae0_f13b_41ea_adc6_b527c074bb1a.slice. May 10 10:51:05.803879 systemd[1]: kubepods-burstable-pod58b6bae0_f13b_41ea_adc6_b527c074bb1a.slice: Consumed 11.870s CPU time, 124.9M memory peak, 136K read from disk, 13.3M written to disk. 
May 10 10:51:05.807260 kubelet[2760]: I0510 10:51:05.807217 2760 scope.go:117] "RemoveContainer" containerID="4d75b14cef3c14a0daab1e90047ff7553ddabd5d279a28be8f9d3ad9b4c3b982" May 10 10:51:05.808161 containerd[1527]: time="2025-05-10T10:51:05.807781317Z" level=error msg="ContainerStatus for \"4d75b14cef3c14a0daab1e90047ff7553ddabd5d279a28be8f9d3ad9b4c3b982\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4d75b14cef3c14a0daab1e90047ff7553ddabd5d279a28be8f9d3ad9b4c3b982\": not found" May 10 10:51:05.808285 kubelet[2760]: E0510 10:51:05.807994 2760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4d75b14cef3c14a0daab1e90047ff7553ddabd5d279a28be8f9d3ad9b4c3b982\": not found" containerID="4d75b14cef3c14a0daab1e90047ff7553ddabd5d279a28be8f9d3ad9b4c3b982" May 10 10:51:05.808285 kubelet[2760]: I0510 10:51:05.808056 2760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4d75b14cef3c14a0daab1e90047ff7553ddabd5d279a28be8f9d3ad9b4c3b982"} err="failed to get container status \"4d75b14cef3c14a0daab1e90047ff7553ddabd5d279a28be8f9d3ad9b4c3b982\": rpc error: code = NotFound desc = an error occurred when try to find container \"4d75b14cef3c14a0daab1e90047ff7553ddabd5d279a28be8f9d3ad9b4c3b982\": not found" May 10 10:51:05.808285 kubelet[2760]: I0510 10:51:05.808224 2760 scope.go:117] "RemoveContainer" containerID="98ed2eb55a347dac1733044a93724a92c25a236224883ed4cc847f7e65e45253" May 10 10:51:05.814399 containerd[1527]: time="2025-05-10T10:51:05.813583634Z" level=info msg="RemoveContainer for \"98ed2eb55a347dac1733044a93724a92c25a236224883ed4cc847f7e65e45253\"" May 10 10:51:05.844096 kubelet[2760]: I0510 10:51:05.844048 2760 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fc5f0b2e-fa7c-4dac-afac-3bf87e9a9c0f-cilium-config-path\") on node \"ci-4330-0-0-n-4c6f505fd4.novalocal\" DevicePath \"\"" May 10 10:51:05.844096 kubelet[2760]: I0510 10:51:05.844086 2760 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/58b6bae0-f13b-41ea-adc6-b527c074bb1a-xtables-lock\") on node \"ci-4330-0-0-n-4c6f505fd4.novalocal\" DevicePath \"\"" May 10 10:51:05.844096 kubelet[2760]: I0510 10:51:05.844101 2760 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/58b6bae0-f13b-41ea-adc6-b527c074bb1a-cilium-run\") on node \"ci-4330-0-0-n-4c6f505fd4.novalocal\" DevicePath \"\"" May 10 10:51:05.844096 kubelet[2760]: I0510 10:51:05.844113 2760 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/58b6bae0-f13b-41ea-adc6-b527c074bb1a-hostproc\") on node \"ci-4330-0-0-n-4c6f505fd4.novalocal\" DevicePath \"\"" May 10 10:51:05.844361 kubelet[2760]: I0510 10:51:05.844124 2760 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/58b6bae0-f13b-41ea-adc6-b527c074bb1a-host-proc-sys-net\") on node \"ci-4330-0-0-n-4c6f505fd4.novalocal\" DevicePath \"\"" May 10 10:51:05.844361 kubelet[2760]: I0510 10:51:05.844135 2760 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/58b6bae0-f13b-41ea-adc6-b527c074bb1a-bpf-maps\") on node \"ci-4330-0-0-n-4c6f505fd4.novalocal\" DevicePath \"\"" May 10 10:51:05.844361 kubelet[2760]: 
I0510 10:51:05.844147 2760 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/58b6bae0-f13b-41ea-adc6-b527c074bb1a-lib-modules\") on node \"ci-4330-0-0-n-4c6f505fd4.novalocal\" DevicePath \"\"" May 10 10:51:05.844361 kubelet[2760]: I0510 10:51:05.844157 2760 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/58b6bae0-f13b-41ea-adc6-b527c074bb1a-cilium-cgroup\") on node \"ci-4330-0-0-n-4c6f505fd4.novalocal\" DevicePath \"\"" May 10 10:51:05.844361 kubelet[2760]: I0510 10:51:05.844167 2760 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/58b6bae0-f13b-41ea-adc6-b527c074bb1a-cilium-config-path\") on node \"ci-4330-0-0-n-4c6f505fd4.novalocal\" DevicePath \"\"" May 10 10:51:05.844361 kubelet[2760]: I0510 10:51:05.844178 2760 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-vggwj\" (UniqueName: \"kubernetes.io/projected/58b6bae0-f13b-41ea-adc6-b527c074bb1a-kube-api-access-vggwj\") on node \"ci-4330-0-0-n-4c6f505fd4.novalocal\" DevicePath \"\"" May 10 10:51:05.844361 kubelet[2760]: I0510 10:51:05.844189 2760 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/58b6bae0-f13b-41ea-adc6-b527c074bb1a-host-proc-sys-kernel\") on node \"ci-4330-0-0-n-4c6f505fd4.novalocal\" DevicePath \"\"" May 10 10:51:05.844570 kubelet[2760]: I0510 10:51:05.844199 2760 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5pd7q\" (UniqueName: \"kubernetes.io/projected/fc5f0b2e-fa7c-4dac-afac-3bf87e9a9c0f-kube-api-access-5pd7q\") on node \"ci-4330-0-0-n-4c6f505fd4.novalocal\" DevicePath \"\"" May 10 10:51:05.844570 kubelet[2760]: I0510 10:51:05.844210 2760 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/58b6bae0-f13b-41ea-adc6-b527c074bb1a-clustermesh-secrets\") on node \"ci-4330-0-0-n-4c6f505fd4.novalocal\" DevicePath \"\"" May 10 10:51:05.844570 kubelet[2760]: I0510 10:51:05.844271 2760 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/58b6bae0-f13b-41ea-adc6-b527c074bb1a-hubble-tls\") on node \"ci-4330-0-0-n-4c6f505fd4.novalocal\" DevicePath \"\"" May 10 10:51:05.844570 kubelet[2760]: I0510 10:51:05.844283 2760 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/58b6bae0-f13b-41ea-adc6-b527c074bb1a-cni-path\") on node \"ci-4330-0-0-n-4c6f505fd4.novalocal\" DevicePath \"\"" May 10 10:51:05.862396 containerd[1527]: time="2025-05-10T10:51:05.862292010Z" level=info msg="RemoveContainer for \"98ed2eb55a347dac1733044a93724a92c25a236224883ed4cc847f7e65e45253\" returns successfully" May 10 10:51:05.862578 kubelet[2760]: I0510 10:51:05.862528 2760 scope.go:117] "RemoveContainer" containerID="7c9ad28eb11e80e3f7833ea33759895ef2a3a5bae0cb985c7129d898ee204d7f" May 10 10:51:05.864122 containerd[1527]: time="2025-05-10T10:51:05.864088100Z" level=info msg="RemoveContainer for \"7c9ad28eb11e80e3f7833ea33759895ef2a3a5bae0cb985c7129d898ee204d7f\"" May 10 10:51:05.931758 containerd[1527]: time="2025-05-10T10:51:05.930655486Z" level=info msg="RemoveContainer for \"7c9ad28eb11e80e3f7833ea33759895ef2a3a5bae0cb985c7129d898ee204d7f\" returns successfully" May 10 10:51:05.932680 kubelet[2760]: I0510 10:51:05.932268 2760 scope.go:117] "RemoveContainer" 
containerID="dddd1427925d22f623670bdf22f3fb8ec733e146cc31b926bc5f3978cf6f7aa2" May 10 10:51:05.942798 containerd[1527]: time="2025-05-10T10:51:05.942676482Z" level=info msg="RemoveContainer for \"dddd1427925d22f623670bdf22f3fb8ec733e146cc31b926bc5f3978cf6f7aa2\"" May 10 10:51:05.994650 containerd[1527]: time="2025-05-10T10:51:05.994415645Z" level=info msg="RemoveContainer for \"dddd1427925d22f623670bdf22f3fb8ec733e146cc31b926bc5f3978cf6f7aa2\" returns successfully" May 10 10:51:05.995010 kubelet[2760]: I0510 10:51:05.994969 2760 scope.go:117] "RemoveContainer" containerID="4662abd002d0cce9b35c088163a204fe5768252faff5abd6d424893b04efc586" May 10 10:51:05.999751 containerd[1527]: time="2025-05-10T10:51:05.999614410Z" level=info msg="RemoveContainer for \"4662abd002d0cce9b35c088163a204fe5768252faff5abd6d424893b04efc586\"" May 10 10:51:06.008608 containerd[1527]: time="2025-05-10T10:51:06.008513458Z" level=info msg="RemoveContainer for \"4662abd002d0cce9b35c088163a204fe5768252faff5abd6d424893b04efc586\" returns successfully" May 10 10:51:06.009962 kubelet[2760]: I0510 10:51:06.008928 2760 scope.go:117] "RemoveContainer" containerID="862ddaea881d2cfae7600c014cc6eeb395f3f87b375aec4e5cf9c3b8685b1111" May 10 10:51:06.012796 containerd[1527]: time="2025-05-10T10:51:06.012672903Z" level=info msg="RemoveContainer for \"862ddaea881d2cfae7600c014cc6eeb395f3f87b375aec4e5cf9c3b8685b1111\"" May 10 10:51:06.020159 containerd[1527]: time="2025-05-10T10:51:06.020082927Z" level=info msg="RemoveContainer for \"862ddaea881d2cfae7600c014cc6eeb395f3f87b375aec4e5cf9c3b8685b1111\" returns successfully" May 10 10:51:06.020758 kubelet[2760]: I0510 10:51:06.020544 2760 scope.go:117] "RemoveContainer" containerID="98ed2eb55a347dac1733044a93724a92c25a236224883ed4cc847f7e65e45253" May 10 10:51:06.021851 containerd[1527]: time="2025-05-10T10:51:06.021536294Z" level=error msg="ContainerStatus for \"98ed2eb55a347dac1733044a93724a92c25a236224883ed4cc847f7e65e45253\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"98ed2eb55a347dac1733044a93724a92c25a236224883ed4cc847f7e65e45253\": not found" May 10 10:51:06.022042 kubelet[2760]: E0510 10:51:06.021871 2760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"98ed2eb55a347dac1733044a93724a92c25a236224883ed4cc847f7e65e45253\": not found" containerID="98ed2eb55a347dac1733044a93724a92c25a236224883ed4cc847f7e65e45253" May 10 10:51:06.022042 kubelet[2760]: I0510 10:51:06.021964 2760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"98ed2eb55a347dac1733044a93724a92c25a236224883ed4cc847f7e65e45253"} err="failed to get container status \"98ed2eb55a347dac1733044a93724a92c25a236224883ed4cc847f7e65e45253\": rpc error: code = NotFound desc = an error occurred when try to find container \"98ed2eb55a347dac1733044a93724a92c25a236224883ed4cc847f7e65e45253\": not found" May 10 10:51:06.022042 kubelet[2760]: I0510 10:51:06.022024 2760 scope.go:117] "RemoveContainer" containerID="7c9ad28eb11e80e3f7833ea33759895ef2a3a5bae0cb985c7129d898ee204d7f" May 10 10:51:06.022923 containerd[1527]: time="2025-05-10T10:51:06.022410344Z" level=error msg="ContainerStatus for \"7c9ad28eb11e80e3f7833ea33759895ef2a3a5bae0cb985c7129d898ee204d7f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7c9ad28eb11e80e3f7833ea33759895ef2a3a5bae0cb985c7129d898ee204d7f\": not found" May 10 
10:51:06.023124 kubelet[2760]: E0510 10:51:06.022928 2760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7c9ad28eb11e80e3f7833ea33759895ef2a3a5bae0cb985c7129d898ee204d7f\": not found" containerID="7c9ad28eb11e80e3f7833ea33759895ef2a3a5bae0cb985c7129d898ee204d7f" May 10 10:51:06.023124 kubelet[2760]: I0510 10:51:06.023019 2760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7c9ad28eb11e80e3f7833ea33759895ef2a3a5bae0cb985c7129d898ee204d7f"} err="failed to get container status \"7c9ad28eb11e80e3f7833ea33759895ef2a3a5bae0cb985c7129d898ee204d7f\": rpc error: code = NotFound desc = an error occurred when try to find container \"7c9ad28eb11e80e3f7833ea33759895ef2a3a5bae0cb985c7129d898ee204d7f\": not found" May 10 10:51:06.023124 kubelet[2760]: I0510 10:51:06.023058 2760 scope.go:117] "RemoveContainer" containerID="dddd1427925d22f623670bdf22f3fb8ec733e146cc31b926bc5f3978cf6f7aa2" May 10 10:51:06.023560 containerd[1527]: time="2025-05-10T10:51:06.023470103Z" level=error msg="ContainerStatus for \"dddd1427925d22f623670bdf22f3fb8ec733e146cc31b926bc5f3978cf6f7aa2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"dddd1427925d22f623670bdf22f3fb8ec733e146cc31b926bc5f3978cf6f7aa2\": not found" May 10 10:51:06.024146 kubelet[2760]: E0510 10:51:06.024038 2760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"dddd1427925d22f623670bdf22f3fb8ec733e146cc31b926bc5f3978cf6f7aa2\": not found" containerID="dddd1427925d22f623670bdf22f3fb8ec733e146cc31b926bc5f3978cf6f7aa2" May 10 10:51:06.024146 kubelet[2760]: I0510 10:51:06.024104 2760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"dddd1427925d22f623670bdf22f3fb8ec733e146cc31b926bc5f3978cf6f7aa2"} err="failed to get container status \"dddd1427925d22f623670bdf22f3fb8ec733e146cc31b926bc5f3978cf6f7aa2\": rpc error: code = NotFound desc = an error occurred when try to find container \"dddd1427925d22f623670bdf22f3fb8ec733e146cc31b926bc5f3978cf6f7aa2\": not found" May 10 10:51:06.024542 kubelet[2760]: I0510 10:51:06.024142 2760 scope.go:117] "RemoveContainer" containerID="4662abd002d0cce9b35c088163a204fe5768252faff5abd6d424893b04efc586" May 10 10:51:06.024781 containerd[1527]: time="2025-05-10T10:51:06.024537215Z" level=error msg="ContainerStatus for \"4662abd002d0cce9b35c088163a204fe5768252faff5abd6d424893b04efc586\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4662abd002d0cce9b35c088163a204fe5768252faff5abd6d424893b04efc586\": not found" May 10 10:51:06.024918 kubelet[2760]: E0510 10:51:06.024861 2760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4662abd002d0cce9b35c088163a204fe5768252faff5abd6d424893b04efc586\": not found" containerID="4662abd002d0cce9b35c088163a204fe5768252faff5abd6d424893b04efc586" May 10 10:51:06.025030 kubelet[2760]: I0510 10:51:06.024908 2760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4662abd002d0cce9b35c088163a204fe5768252faff5abd6d424893b04efc586"} err="failed to get container status \"4662abd002d0cce9b35c088163a204fe5768252faff5abd6d424893b04efc586\": rpc error: code = NotFound desc = an error occurred 
when try to find container \"4662abd002d0cce9b35c088163a204fe5768252faff5abd6d424893b04efc586\": not found" May 10 10:51:06.025030 kubelet[2760]: I0510 10:51:06.024944 2760 scope.go:117] "RemoveContainer" containerID="862ddaea881d2cfae7600c014cc6eeb395f3f87b375aec4e5cf9c3b8685b1111" May 10 10:51:06.026156 containerd[1527]: time="2025-05-10T10:51:06.026006492Z" level=error msg="ContainerStatus for \"862ddaea881d2cfae7600c014cc6eeb395f3f87b375aec4e5cf9c3b8685b1111\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"862ddaea881d2cfae7600c014cc6eeb395f3f87b375aec4e5cf9c3b8685b1111\": not found" May 10 10:51:06.026590 kubelet[2760]: E0510 10:51:06.026462 2760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"862ddaea881d2cfae7600c014cc6eeb395f3f87b375aec4e5cf9c3b8685b1111\": not found" containerID="862ddaea881d2cfae7600c014cc6eeb395f3f87b375aec4e5cf9c3b8685b1111" May 10 10:51:06.026590 kubelet[2760]: I0510 10:51:06.026558 2760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"862ddaea881d2cfae7600c014cc6eeb395f3f87b375aec4e5cf9c3b8685b1111"} err="failed to get container status \"862ddaea881d2cfae7600c014cc6eeb395f3f87b375aec4e5cf9c3b8685b1111\": rpc error: code = NotFound desc = an error occurred when try to find container \"862ddaea881d2cfae7600c014cc6eeb395f3f87b375aec4e5cf9c3b8685b1111\": not found" May 10 10:51:06.044489 systemd[1]: Removed slice kubepods-besteffort-podfc5f0b2e_fa7c_4dac_afac_3bf87e9a9c0f.slice - libcontainer container kubepods-besteffort-podfc5f0b2e_fa7c_4dac_afac_3bf87e9a9c0f.slice. May 10 10:51:06.044941 systemd[1]: kubepods-besteffort-podfc5f0b2e_fa7c_4dac_afac_3bf87e9a9c0f.slice: Consumed 1.241s CPU time, 27.1M memory peak, 4K written to disk. May 10 10:51:06.102008 kubelet[2760]: I0510 10:51:06.101887 2760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="58b6bae0-f13b-41ea-adc6-b527c074bb1a" path="/var/lib/kubelet/pods/58b6bae0-f13b-41ea-adc6-b527c074bb1a/volumes" May 10 10:51:06.104972 kubelet[2760]: I0510 10:51:06.104490 2760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc5f0b2e-fa7c-4dac-afac-3bf87e9a9c0f" path="/var/lib/kubelet/pods/fc5f0b2e-fa7c-4dac-afac-3bf87e9a9c0f/volumes" May 10 10:51:06.455605 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-34578985d17308cbfe47aa663ce7aed86a018217acfecfab7d1fd145a39ae282-shm.mount: Deactivated successfully. May 10 10:51:06.455959 systemd[1]: var-lib-kubelet-pods-58b6bae0\x2df13b\x2d41ea\x2dadc6\x2db527c074bb1a-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. May 10 10:51:06.456148 systemd[1]: var-lib-kubelet-pods-58b6bae0\x2df13b\x2d41ea\x2dadc6\x2db527c074bb1a-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 10 10:51:06.456339 systemd[1]: var-lib-kubelet-pods-58b6bae0\x2df13b\x2d41ea\x2dadc6\x2db527c074bb1a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dvggwj.mount: Deactivated successfully. May 10 10:51:06.456521 systemd[1]: var-lib-kubelet-pods-fc5f0b2e\x2dfa7c\x2d4dac\x2dafac\x2d3bf87e9a9c0f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d5pd7q.mount: Deactivated successfully. 
May 10 10:51:07.472650 sshd[4311]: Connection closed by 172.24.4.1 port 33250 May 10 10:51:07.473143 sshd-session[4306]: pam_unix(sshd:session): session closed for user core May 10 10:51:07.489754 systemd[1]: sshd@24-172.24.4.188:22-172.24.4.1:33250.service: Deactivated successfully. May 10 10:51:07.499116 systemd[1]: session-27.scope: Deactivated successfully. May 10 10:51:07.499595 systemd[1]: session-27.scope: Consumed 1.722s CPU time, 26M memory peak. May 10 10:51:07.502597 systemd-logind[1498]: Session 27 logged out. Waiting for processes to exit. May 10 10:51:07.512882 systemd[1]: Started sshd@25-172.24.4.188:22-172.24.4.1:34468.service - OpenSSH per-connection server daemon (172.24.4.1:34468). May 10 10:51:07.516895 systemd-logind[1498]: Removed session 27. May 10 10:51:08.861047 sshd[4463]: Accepted publickey for core from 172.24.4.1 port 34468 ssh2: RSA SHA256:KJPMTbzVpA/z4Q0YiJhVuiYABmBXzBBZVku5cZVzxpg May 10 10:51:08.864472 sshd-session[4463]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 10:51:08.878501 systemd-logind[1498]: New session 28 of user core. May 10 10:51:08.887028 systemd[1]: Started session-28.scope - Session 28 of User core. May 10 10:51:09.340920 kubelet[2760]: E0510 10:51:09.340758 2760 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 10 10:51:10.452806 kubelet[2760]: I0510 10:51:10.452651 2760 memory_manager.go:355] "RemoveStaleState removing state" podUID="58b6bae0-f13b-41ea-adc6-b527c074bb1a" containerName="cilium-agent" May 10 10:51:10.452806 kubelet[2760]: I0510 10:51:10.452769 2760 memory_manager.go:355] "RemoveStaleState removing state" podUID="fc5f0b2e-fa7c-4dac-afac-3bf87e9a9c0f" containerName="cilium-operator" May 10 10:51:10.465172 systemd[1]: Created slice kubepods-burstable-podb04061f2_a3d9_4cd7_96e7_a1d25b25314c.slice - libcontainer container kubepods-burstable-podb04061f2_a3d9_4cd7_96e7_a1d25b25314c.slice. May 10 10:51:10.507138 sshd[4466]: Connection closed by 172.24.4.1 port 34468 May 10 10:51:10.507824 sshd-session[4463]: pam_unix(sshd:session): session closed for user core May 10 10:51:10.523275 systemd[1]: sshd@25-172.24.4.188:22-172.24.4.1:34468.service: Deactivated successfully. May 10 10:51:10.529728 systemd[1]: session-28.scope: Deactivated successfully. May 10 10:51:10.530220 systemd[1]: session-28.scope: Consumed 1.026s CPU time, 23.8M memory peak. May 10 10:51:10.533058 systemd-logind[1498]: Session 28 logged out. Waiting for processes to exit. May 10 10:51:10.536961 systemd[1]: Started sshd@26-172.24.4.188:22-172.24.4.1:34484.service - OpenSSH per-connection server daemon (172.24.4.1:34484). May 10 10:51:10.543760 systemd-logind[1498]: Removed session 28. 
May 10 10:51:10.580294 kubelet[2760]: I0510 10:51:10.580249 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b04061f2-a3d9-4cd7-96e7-a1d25b25314c-cilium-cgroup\") pod \"cilium-hrfhk\" (UID: \"b04061f2-a3d9-4cd7-96e7-a1d25b25314c\") " pod="kube-system/cilium-hrfhk" May 10 10:51:10.580569 kubelet[2760]: I0510 10:51:10.580489 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b04061f2-a3d9-4cd7-96e7-a1d25b25314c-cni-path\") pod \"cilium-hrfhk\" (UID: \"b04061f2-a3d9-4cd7-96e7-a1d25b25314c\") " pod="kube-system/cilium-hrfhk" May 10 10:51:10.580676 kubelet[2760]: I0510 10:51:10.580659 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b04061f2-a3d9-4cd7-96e7-a1d25b25314c-host-proc-sys-kernel\") pod \"cilium-hrfhk\" (UID: \"b04061f2-a3d9-4cd7-96e7-a1d25b25314c\") " pod="kube-system/cilium-hrfhk" May 10 10:51:10.580913 kubelet[2760]: I0510 10:51:10.580861 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b04061f2-a3d9-4cd7-96e7-a1d25b25314c-cilium-run\") pod \"cilium-hrfhk\" (UID: \"b04061f2-a3d9-4cd7-96e7-a1d25b25314c\") " pod="kube-system/cilium-hrfhk" May 10 10:51:10.581058 kubelet[2760]: I0510 10:51:10.581039 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b04061f2-a3d9-4cd7-96e7-a1d25b25314c-cilium-config-path\") pod \"cilium-hrfhk\" (UID: \"b04061f2-a3d9-4cd7-96e7-a1d25b25314c\") " pod="kube-system/cilium-hrfhk" May 10 10:51:10.581187 kubelet[2760]: I0510 10:51:10.581160 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b04061f2-a3d9-4cd7-96e7-a1d25b25314c-host-proc-sys-net\") pod \"cilium-hrfhk\" (UID: \"b04061f2-a3d9-4cd7-96e7-a1d25b25314c\") " pod="kube-system/cilium-hrfhk" May 10 10:51:10.581436 kubelet[2760]: I0510 10:51:10.581303 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b04061f2-a3d9-4cd7-96e7-a1d25b25314c-clustermesh-secrets\") pod \"cilium-hrfhk\" (UID: \"b04061f2-a3d9-4cd7-96e7-a1d25b25314c\") " pod="kube-system/cilium-hrfhk" May 10 10:51:10.581436 kubelet[2760]: I0510 10:51:10.581356 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b04061f2-a3d9-4cd7-96e7-a1d25b25314c-etc-cni-netd\") pod \"cilium-hrfhk\" (UID: \"b04061f2-a3d9-4cd7-96e7-a1d25b25314c\") " pod="kube-system/cilium-hrfhk" May 10 10:51:10.581614 kubelet[2760]: I0510 10:51:10.581413 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b04061f2-a3d9-4cd7-96e7-a1d25b25314c-hubble-tls\") pod \"cilium-hrfhk\" (UID: \"b04061f2-a3d9-4cd7-96e7-a1d25b25314c\") " pod="kube-system/cilium-hrfhk" May 10 10:51:10.581614 kubelet[2760]: I0510 10:51:10.581564 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" 
(UniqueName: \"kubernetes.io/host-path/b04061f2-a3d9-4cd7-96e7-a1d25b25314c-hostproc\") pod \"cilium-hrfhk\" (UID: \"b04061f2-a3d9-4cd7-96e7-a1d25b25314c\") " pod="kube-system/cilium-hrfhk" May 10 10:51:10.581614 kubelet[2760]: I0510 10:51:10.581585 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b04061f2-a3d9-4cd7-96e7-a1d25b25314c-xtables-lock\") pod \"cilium-hrfhk\" (UID: \"b04061f2-a3d9-4cd7-96e7-a1d25b25314c\") " pod="kube-system/cilium-hrfhk" May 10 10:51:10.581930 kubelet[2760]: I0510 10:51:10.581765 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l28rz\" (UniqueName: \"kubernetes.io/projected/b04061f2-a3d9-4cd7-96e7-a1d25b25314c-kube-api-access-l28rz\") pod \"cilium-hrfhk\" (UID: \"b04061f2-a3d9-4cd7-96e7-a1d25b25314c\") " pod="kube-system/cilium-hrfhk" May 10 10:51:10.581930 kubelet[2760]: I0510 10:51:10.581808 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b04061f2-a3d9-4cd7-96e7-a1d25b25314c-bpf-maps\") pod \"cilium-hrfhk\" (UID: \"b04061f2-a3d9-4cd7-96e7-a1d25b25314c\") " pod="kube-system/cilium-hrfhk" May 10 10:51:10.581930 kubelet[2760]: I0510 10:51:10.581850 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b04061f2-a3d9-4cd7-96e7-a1d25b25314c-lib-modules\") pod \"cilium-hrfhk\" (UID: \"b04061f2-a3d9-4cd7-96e7-a1d25b25314c\") " pod="kube-system/cilium-hrfhk" May 10 10:51:10.581930 kubelet[2760]: I0510 10:51:10.581884 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/b04061f2-a3d9-4cd7-96e7-a1d25b25314c-cilium-ipsec-secrets\") pod \"cilium-hrfhk\" (UID: \"b04061f2-a3d9-4cd7-96e7-a1d25b25314c\") " pod="kube-system/cilium-hrfhk" May 10 10:51:10.771615 containerd[1527]: time="2025-05-10T10:51:10.771429846Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-hrfhk,Uid:b04061f2-a3d9-4cd7-96e7-a1d25b25314c,Namespace:kube-system,Attempt:0,}" May 10 10:51:10.819301 containerd[1527]: time="2025-05-10T10:51:10.818262523Z" level=info msg="connecting to shim 92f59343b07c0b7c7e1b755d37235e24ab4e03a3310d425e8860daef60766be2" address="unix:///run/containerd/s/88ca1739dbb5e7ac5ee3d90d2f3220c11a0c7cd1d7cdc6fafe34c4e69d5f1557" namespace=k8s.io protocol=ttrpc version=3 May 10 10:51:10.873866 systemd[1]: Started cri-containerd-92f59343b07c0b7c7e1b755d37235e24ab4e03a3310d425e8860daef60766be2.scope - libcontainer container 92f59343b07c0b7c7e1b755d37235e24ab4e03a3310d425e8860daef60766be2. 
May 10 10:51:10.902891 containerd[1527]: time="2025-05-10T10:51:10.902633118Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-hrfhk,Uid:b04061f2-a3d9-4cd7-96e7-a1d25b25314c,Namespace:kube-system,Attempt:0,} returns sandbox id \"92f59343b07c0b7c7e1b755d37235e24ab4e03a3310d425e8860daef60766be2\"" May 10 10:51:10.908195 containerd[1527]: time="2025-05-10T10:51:10.906771013Z" level=info msg="CreateContainer within sandbox \"92f59343b07c0b7c7e1b755d37235e24ab4e03a3310d425e8860daef60766be2\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 10 10:51:10.923622 containerd[1527]: time="2025-05-10T10:51:10.923575505Z" level=info msg="Container 9628d7e005fd5779c36280f3847d4b3a4c2ceb98c5d33f296b5769f7f1b6b164: CDI devices from CRI Config.CDIDevices: []" May 10 10:51:10.938445 containerd[1527]: time="2025-05-10T10:51:10.938392879Z" level=info msg="CreateContainer within sandbox \"92f59343b07c0b7c7e1b755d37235e24ab4e03a3310d425e8860daef60766be2\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"9628d7e005fd5779c36280f3847d4b3a4c2ceb98c5d33f296b5769f7f1b6b164\"" May 10 10:51:10.939756 containerd[1527]: time="2025-05-10T10:51:10.939620612Z" level=info msg="StartContainer for \"9628d7e005fd5779c36280f3847d4b3a4c2ceb98c5d33f296b5769f7f1b6b164\"" May 10 10:51:10.942179 containerd[1527]: time="2025-05-10T10:51:10.941931218Z" level=info msg="connecting to shim 9628d7e005fd5779c36280f3847d4b3a4c2ceb98c5d33f296b5769f7f1b6b164" address="unix:///run/containerd/s/88ca1739dbb5e7ac5ee3d90d2f3220c11a0c7cd1d7cdc6fafe34c4e69d5f1557" protocol=ttrpc version=3 May 10 10:51:10.969866 systemd[1]: Started cri-containerd-9628d7e005fd5779c36280f3847d4b3a4c2ceb98c5d33f296b5769f7f1b6b164.scope - libcontainer container 9628d7e005fd5779c36280f3847d4b3a4c2ceb98c5d33f296b5769f7f1b6b164. May 10 10:51:11.006141 containerd[1527]: time="2025-05-10T10:51:11.006015408Z" level=info msg="StartContainer for \"9628d7e005fd5779c36280f3847d4b3a4c2ceb98c5d33f296b5769f7f1b6b164\" returns successfully" May 10 10:51:11.016365 systemd[1]: cri-containerd-9628d7e005fd5779c36280f3847d4b3a4c2ceb98c5d33f296b5769f7f1b6b164.scope: Deactivated successfully. 
May 10 10:51:11.019313 containerd[1527]: time="2025-05-10T10:51:11.019216869Z" level=info msg="received exit event container_id:\"9628d7e005fd5779c36280f3847d4b3a4c2ceb98c5d33f296b5769f7f1b6b164\" id:\"9628d7e005fd5779c36280f3847d4b3a4c2ceb98c5d33f296b5769f7f1b6b164\" pid:4538 exited_at:{seconds:1746874271 nanos:18765351}" May 10 10:51:11.019313 containerd[1527]: time="2025-05-10T10:51:11.019267905Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9628d7e005fd5779c36280f3847d4b3a4c2ceb98c5d33f296b5769f7f1b6b164\" id:\"9628d7e005fd5779c36280f3847d4b3a4c2ceb98c5d33f296b5769f7f1b6b164\" pid:4538 exited_at:{seconds:1746874271 nanos:18765351}" May 10 10:51:11.416775 kubelet[2760]: I0510 10:51:11.416568 2760 setters.go:602] "Node became not ready" node="ci-4330-0-0-n-4c6f505fd4.novalocal" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-10T10:51:11Z","lastTransitionTime":"2025-05-10T10:51:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} May 10 10:51:11.842136 containerd[1527]: time="2025-05-10T10:51:11.839657307Z" level=info msg="CreateContainer within sandbox \"92f59343b07c0b7c7e1b755d37235e24ab4e03a3310d425e8860daef60766be2\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 10 10:51:11.863607 containerd[1527]: time="2025-05-10T10:51:11.862087606Z" level=info msg="Container f446b8cda90ace96ca2bbc31a14b2b6d0c1747d7b1bfd3c34f2fc5dab41993ee: CDI devices from CRI Config.CDIDevices: []" May 10 10:51:11.885415 sshd[4475]: Accepted publickey for core from 172.24.4.1 port 34484 ssh2: RSA SHA256:KJPMTbzVpA/z4Q0YiJhVuiYABmBXzBBZVku5cZVzxpg May 10 10:51:11.888864 sshd-session[4475]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 10:51:11.895019 containerd[1527]: time="2025-05-10T10:51:11.893591972Z" level=info msg="CreateContainer within sandbox \"92f59343b07c0b7c7e1b755d37235e24ab4e03a3310d425e8860daef60766be2\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"f446b8cda90ace96ca2bbc31a14b2b6d0c1747d7b1bfd3c34f2fc5dab41993ee\"" May 10 10:51:11.898678 containerd[1527]: time="2025-05-10T10:51:11.898134936Z" level=info msg="StartContainer for \"f446b8cda90ace96ca2bbc31a14b2b6d0c1747d7b1bfd3c34f2fc5dab41993ee\"" May 10 10:51:11.905025 containerd[1527]: time="2025-05-10T10:51:11.903520081Z" level=info msg="connecting to shim f446b8cda90ace96ca2bbc31a14b2b6d0c1747d7b1bfd3c34f2fc5dab41993ee" address="unix:///run/containerd/s/88ca1739dbb5e7ac5ee3d90d2f3220c11a0c7cd1d7cdc6fafe34c4e69d5f1557" protocol=ttrpc version=3 May 10 10:51:11.909675 systemd-logind[1498]: New session 29 of user core. May 10 10:51:11.926034 systemd[1]: Started session-29.scope - Session 29 of User core. May 10 10:51:11.937879 systemd[1]: Started cri-containerd-f446b8cda90ace96ca2bbc31a14b2b6d0c1747d7b1bfd3c34f2fc5dab41993ee.scope - libcontainer container f446b8cda90ace96ca2bbc31a14b2b6d0c1747d7b1bfd3c34f2fc5dab41993ee. May 10 10:51:11.974239 containerd[1527]: time="2025-05-10T10:51:11.974152949Z" level=info msg="StartContainer for \"f446b8cda90ace96ca2bbc31a14b2b6d0c1747d7b1bfd3c34f2fc5dab41993ee\" returns successfully" May 10 10:51:11.980563 systemd[1]: cri-containerd-f446b8cda90ace96ca2bbc31a14b2b6d0c1747d7b1bfd3c34f2fc5dab41993ee.scope: Deactivated successfully. 
May 10 10:51:11.982412 containerd[1527]: time="2025-05-10T10:51:11.982243419Z" level=info msg="received exit event container_id:\"f446b8cda90ace96ca2bbc31a14b2b6d0c1747d7b1bfd3c34f2fc5dab41993ee\" id:\"f446b8cda90ace96ca2bbc31a14b2b6d0c1747d7b1bfd3c34f2fc5dab41993ee\" pid:4584 exited_at:{seconds:1746874271 nanos:981955449}" May 10 10:51:11.983087 containerd[1527]: time="2025-05-10T10:51:11.983044062Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f446b8cda90ace96ca2bbc31a14b2b6d0c1747d7b1bfd3c34f2fc5dab41993ee\" id:\"f446b8cda90ace96ca2bbc31a14b2b6d0c1747d7b1bfd3c34f2fc5dab41993ee\" pid:4584 exited_at:{seconds:1746874271 nanos:981955449}" May 10 10:51:12.008327 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f446b8cda90ace96ca2bbc31a14b2b6d0c1747d7b1bfd3c34f2fc5dab41993ee-rootfs.mount: Deactivated successfully. May 10 10:51:12.479777 sshd[4582]: Connection closed by 172.24.4.1 port 34484 May 10 10:51:12.480446 sshd-session[4475]: pam_unix(sshd:session): session closed for user core May 10 10:51:12.501434 systemd[1]: sshd@26-172.24.4.188:22-172.24.4.1:34484.service: Deactivated successfully. May 10 10:51:12.507423 systemd[1]: session-29.scope: Deactivated successfully. May 10 10:51:12.509757 systemd-logind[1498]: Session 29 logged out. Waiting for processes to exit. May 10 10:51:12.516369 systemd[1]: Started sshd@27-172.24.4.188:22-172.24.4.1:34496.service - OpenSSH per-connection server daemon (172.24.4.1:34496). May 10 10:51:12.520842 systemd-logind[1498]: Removed session 29. May 10 10:51:12.853022 containerd[1527]: time="2025-05-10T10:51:12.851317621Z" level=info msg="CreateContainer within sandbox \"92f59343b07c0b7c7e1b755d37235e24ab4e03a3310d425e8860daef60766be2\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 10 10:51:12.891005 containerd[1527]: time="2025-05-10T10:51:12.887435624Z" level=info msg="Container fd7e0146a05899688d262b01579683e193e3cf2fd0b14784edd4db0d75e774b8: CDI devices from CRI Config.CDIDevices: []" May 10 10:51:12.916492 containerd[1527]: time="2025-05-10T10:51:12.916444077Z" level=info msg="CreateContainer within sandbox \"92f59343b07c0b7c7e1b755d37235e24ab4e03a3310d425e8860daef60766be2\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"fd7e0146a05899688d262b01579683e193e3cf2fd0b14784edd4db0d75e774b8\"" May 10 10:51:12.917984 containerd[1527]: time="2025-05-10T10:51:12.917351229Z" level=info msg="StartContainer for \"fd7e0146a05899688d262b01579683e193e3cf2fd0b14784edd4db0d75e774b8\"" May 10 10:51:12.919669 containerd[1527]: time="2025-05-10T10:51:12.919603836Z" level=info msg="connecting to shim fd7e0146a05899688d262b01579683e193e3cf2fd0b14784edd4db0d75e774b8" address="unix:///run/containerd/s/88ca1739dbb5e7ac5ee3d90d2f3220c11a0c7cd1d7cdc6fafe34c4e69d5f1557" protocol=ttrpc version=3 May 10 10:51:12.947890 systemd[1]: Started cri-containerd-fd7e0146a05899688d262b01579683e193e3cf2fd0b14784edd4db0d75e774b8.scope - libcontainer container fd7e0146a05899688d262b01579683e193e3cf2fd0b14784edd4db0d75e774b8. May 10 10:51:13.004404 containerd[1527]: time="2025-05-10T10:51:13.004359896Z" level=info msg="StartContainer for \"fd7e0146a05899688d262b01579683e193e3cf2fd0b14784edd4db0d75e774b8\" returns successfully" May 10 10:51:13.005220 systemd[1]: cri-containerd-fd7e0146a05899688d262b01579683e193e3cf2fd0b14784edd4db0d75e774b8.scope: Deactivated successfully. 
May 10 10:51:13.011392 containerd[1527]: time="2025-05-10T10:51:13.011349951Z" level=info msg="received exit event container_id:\"fd7e0146a05899688d262b01579683e193e3cf2fd0b14784edd4db0d75e774b8\" id:\"fd7e0146a05899688d262b01579683e193e3cf2fd0b14784edd4db0d75e774b8\" pid:4635 exited_at:{seconds:1746874273 nanos:11100033}" May 10 10:51:13.012076 containerd[1527]: time="2025-05-10T10:51:13.011768126Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fd7e0146a05899688d262b01579683e193e3cf2fd0b14784edd4db0d75e774b8\" id:\"fd7e0146a05899688d262b01579683e193e3cf2fd0b14784edd4db0d75e774b8\" pid:4635 exited_at:{seconds:1746874273 nanos:11100033}" May 10 10:51:13.036363 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fd7e0146a05899688d262b01579683e193e3cf2fd0b14784edd4db0d75e774b8-rootfs.mount: Deactivated successfully. May 10 10:51:13.880235 containerd[1527]: time="2025-05-10T10:51:13.880106583Z" level=info msg="CreateContainer within sandbox \"92f59343b07c0b7c7e1b755d37235e24ab4e03a3310d425e8860daef60766be2\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 10 10:51:13.966294 sshd[4620]: Accepted publickey for core from 172.24.4.1 port 34496 ssh2: RSA SHA256:KJPMTbzVpA/z4Q0YiJhVuiYABmBXzBBZVku5cZVzxpg May 10 10:51:13.969577 sshd-session[4620]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 10:51:13.984810 systemd-logind[1498]: New session 30 of user core. May 10 10:51:13.996046 systemd[1]: Started session-30.scope - Session 30 of User core. May 10 10:51:14.079783 containerd[1527]: time="2025-05-10T10:51:14.078032690Z" level=info msg="Container ad0a1612f4d6f867dcd56c365c1bc0e70c1981cbdfada7a55ce31052708dfc08: CDI devices from CRI Config.CDIDevices: []" May 10 10:51:14.111828 containerd[1527]: time="2025-05-10T10:51:14.111649380Z" level=info msg="CreateContainer within sandbox \"92f59343b07c0b7c7e1b755d37235e24ab4e03a3310d425e8860daef60766be2\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"ad0a1612f4d6f867dcd56c365c1bc0e70c1981cbdfada7a55ce31052708dfc08\"" May 10 10:51:14.116743 containerd[1527]: time="2025-05-10T10:51:14.113031092Z" level=info msg="StartContainer for \"ad0a1612f4d6f867dcd56c365c1bc0e70c1981cbdfada7a55ce31052708dfc08\"" May 10 10:51:14.120784 containerd[1527]: time="2025-05-10T10:51:14.118652391Z" level=info msg="connecting to shim ad0a1612f4d6f867dcd56c365c1bc0e70c1981cbdfada7a55ce31052708dfc08" address="unix:///run/containerd/s/88ca1739dbb5e7ac5ee3d90d2f3220c11a0c7cd1d7cdc6fafe34c4e69d5f1557" protocol=ttrpc version=3 May 10 10:51:14.168893 systemd[1]: Started cri-containerd-ad0a1612f4d6f867dcd56c365c1bc0e70c1981cbdfada7a55ce31052708dfc08.scope - libcontainer container ad0a1612f4d6f867dcd56c365c1bc0e70c1981cbdfada7a55ce31052708dfc08. May 10 10:51:14.210617 systemd[1]: cri-containerd-ad0a1612f4d6f867dcd56c365c1bc0e70c1981cbdfada7a55ce31052708dfc08.scope: Deactivated successfully. 
May 10 10:51:14.212489 containerd[1527]: time="2025-05-10T10:51:14.212083410Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ad0a1612f4d6f867dcd56c365c1bc0e70c1981cbdfada7a55ce31052708dfc08\" id:\"ad0a1612f4d6f867dcd56c365c1bc0e70c1981cbdfada7a55ce31052708dfc08\" pid:4678 exited_at:{seconds:1746874274 nanos:211255195}" May 10 10:51:14.215435 containerd[1527]: time="2025-05-10T10:51:14.215289436Z" level=info msg="received exit event container_id:\"ad0a1612f4d6f867dcd56c365c1bc0e70c1981cbdfada7a55ce31052708dfc08\" id:\"ad0a1612f4d6f867dcd56c365c1bc0e70c1981cbdfada7a55ce31052708dfc08\" pid:4678 exited_at:{seconds:1746874274 nanos:211255195}" May 10 10:51:14.225191 containerd[1527]: time="2025-05-10T10:51:14.224974108Z" level=info msg="StartContainer for \"ad0a1612f4d6f867dcd56c365c1bc0e70c1981cbdfada7a55ce31052708dfc08\" returns successfully" May 10 10:51:14.239603 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ad0a1612f4d6f867dcd56c365c1bc0e70c1981cbdfada7a55ce31052708dfc08-rootfs.mount: Deactivated successfully. May 10 10:51:14.342530 kubelet[2760]: E0510 10:51:14.342365 2760 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 10 10:51:14.900107 containerd[1527]: time="2025-05-10T10:51:14.899877780Z" level=info msg="CreateContainer within sandbox \"92f59343b07c0b7c7e1b755d37235e24ab4e03a3310d425e8860daef60766be2\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 10 10:51:14.927921 containerd[1527]: time="2025-05-10T10:51:14.926637362Z" level=info msg="Container 3682fa726c8e346b2cb0ae2e727005adc93cab0c3953e350da4b58c0c924dbb7: CDI devices from CRI Config.CDIDevices: []" May 10 10:51:14.957439 containerd[1527]: time="2025-05-10T10:51:14.957356435Z" level=info msg="CreateContainer within sandbox \"92f59343b07c0b7c7e1b755d37235e24ab4e03a3310d425e8860daef60766be2\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"3682fa726c8e346b2cb0ae2e727005adc93cab0c3953e350da4b58c0c924dbb7\"" May 10 10:51:14.958807 containerd[1527]: time="2025-05-10T10:51:14.958361721Z" level=info msg="StartContainer for \"3682fa726c8e346b2cb0ae2e727005adc93cab0c3953e350da4b58c0c924dbb7\"" May 10 10:51:14.961171 containerd[1527]: time="2025-05-10T10:51:14.961104558Z" level=info msg="connecting to shim 3682fa726c8e346b2cb0ae2e727005adc93cab0c3953e350da4b58c0c924dbb7" address="unix:///run/containerd/s/88ca1739dbb5e7ac5ee3d90d2f3220c11a0c7cd1d7cdc6fafe34c4e69d5f1557" protocol=ttrpc version=3 May 10 10:51:14.984871 systemd[1]: Started cri-containerd-3682fa726c8e346b2cb0ae2e727005adc93cab0c3953e350da4b58c0c924dbb7.scope - libcontainer container 3682fa726c8e346b2cb0ae2e727005adc93cab0c3953e350da4b58c0c924dbb7. 
May 10 10:51:15.032096 containerd[1527]: time="2025-05-10T10:51:15.031985584Z" level=info msg="StartContainer for \"3682fa726c8e346b2cb0ae2e727005adc93cab0c3953e350da4b58c0c924dbb7\" returns successfully" May 10 10:51:15.153430 containerd[1527]: time="2025-05-10T10:51:15.153137469Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3682fa726c8e346b2cb0ae2e727005adc93cab0c3953e350da4b58c0c924dbb7\" id:\"10c0361548fe67667610c802d43139efddfa9d747a55cc5c8de92bcf842efdb7\" pid:4751 exited_at:{seconds:1746874275 nanos:152761654}" May 10 10:51:15.514973 kernel: cryptd: max_cpu_qlen set to 1000 May 10 10:51:15.568831 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm_base(ctr(aes-generic),ghash-generic)))) May 10 10:51:15.603745 kernel: jitterentropy: Initialization failed with host not compliant with requirements: 9 May 10 10:51:15.623721 kernel: DRBG: Continuing without Jitter RNG May 10 10:51:16.624100 containerd[1527]: time="2025-05-10T10:51:16.624044772Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3682fa726c8e346b2cb0ae2e727005adc93cab0c3953e350da4b58c0c924dbb7\" id:\"14afa9b5b670f5702a1fa6040c9a621e8a1d700090131216cb75b519d7f0a4d3\" pid:4889 exit_status:1 exited_at:{seconds:1746874276 nanos:622919561}" May 10 10:51:18.814294 containerd[1527]: time="2025-05-10T10:51:18.814227006Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3682fa726c8e346b2cb0ae2e727005adc93cab0c3953e350da4b58c0c924dbb7\" id:\"032eabd2d4622753f3a686113a077eb389ef33719c632d6d5657f1ad5cf151b5\" pid:5267 exit_status:1 exited_at:{seconds:1746874278 nanos:813819641}" May 10 10:51:18.945843 systemd-networkd[1420]: lxc_health: Link UP May 10 10:51:18.968874 systemd-networkd[1420]: lxc_health: Gained carrier May 10 10:51:20.826216 kubelet[2760]: I0510 10:51:20.825266 2760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-hrfhk" podStartSLOduration=10.825201267 podStartE2EDuration="10.825201267s" podCreationTimestamp="2025-05-10 10:51:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-10 10:51:15.925997735 +0000 UTC m=+351.976671431" watchObservedRunningTime="2025-05-10 10:51:20.825201267 +0000 UTC m=+356.875874512" May 10 10:51:20.895866 systemd-networkd[1420]: lxc_health: Gained IPv6LL May 10 10:51:21.173195 containerd[1527]: time="2025-05-10T10:51:21.173146441Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3682fa726c8e346b2cb0ae2e727005adc93cab0c3953e350da4b58c0c924dbb7\" id:\"8229dc07f1ba4a21d9723c33eb080570858f63c4a11db0941489b3f37c1fa42a\" pid:5363 exited_at:{seconds:1746874281 nanos:172745809}" May 10 10:51:23.363828 containerd[1527]: time="2025-05-10T10:51:23.363755872Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3682fa726c8e346b2cb0ae2e727005adc93cab0c3953e350da4b58c0c924dbb7\" id:\"3ff710fc5802b1b63a6b12e71b61f1b51bcbca40835d3d8051ebe297d54366f5\" pid:5388 exited_at:{seconds:1746874283 nanos:361641845}" May 10 10:51:24.145854 containerd[1527]: time="2025-05-10T10:51:24.145586348Z" level=info msg="StopPodSandbox for \"ed7fba119ac00bc0b25c560a2e380c59a4dc2194369c30e9bb0d9719082e8a78\"" May 10 10:51:24.147027 containerd[1527]: time="2025-05-10T10:51:24.146436212Z" level=info msg="TearDown network for sandbox \"ed7fba119ac00bc0b25c560a2e380c59a4dc2194369c30e9bb0d9719082e8a78\" successfully" May 10 10:51:24.147027 containerd[1527]: 
time="2025-05-10T10:51:24.146502586Z" level=info msg="StopPodSandbox for \"ed7fba119ac00bc0b25c560a2e380c59a4dc2194369c30e9bb0d9719082e8a78\" returns successfully" May 10 10:51:24.147638 containerd[1527]: time="2025-05-10T10:51:24.147125075Z" level=info msg="RemovePodSandbox for \"ed7fba119ac00bc0b25c560a2e380c59a4dc2194369c30e9bb0d9719082e8a78\"" May 10 10:51:24.147638 containerd[1527]: time="2025-05-10T10:51:24.147178234Z" level=info msg="Forcibly stopping sandbox \"ed7fba119ac00bc0b25c560a2e380c59a4dc2194369c30e9bb0d9719082e8a78\"" May 10 10:51:24.147638 containerd[1527]: time="2025-05-10T10:51:24.147299432Z" level=info msg="TearDown network for sandbox \"ed7fba119ac00bc0b25c560a2e380c59a4dc2194369c30e9bb0d9719082e8a78\" successfully" May 10 10:51:24.148846 containerd[1527]: time="2025-05-10T10:51:24.148810117Z" level=info msg="Ensure that sandbox ed7fba119ac00bc0b25c560a2e380c59a4dc2194369c30e9bb0d9719082e8a78 in task-service has been cleanup successfully" May 10 10:51:24.152646 containerd[1527]: time="2025-05-10T10:51:24.152607002Z" level=info msg="RemovePodSandbox \"ed7fba119ac00bc0b25c560a2e380c59a4dc2194369c30e9bb0d9719082e8a78\" returns successfully" May 10 10:51:24.156241 containerd[1527]: time="2025-05-10T10:51:24.155917544Z" level=info msg="StopPodSandbox for \"34578985d17308cbfe47aa663ce7aed86a018217acfecfab7d1fd145a39ae282\"" May 10 10:51:24.156241 containerd[1527]: time="2025-05-10T10:51:24.156046486Z" level=info msg="TearDown network for sandbox \"34578985d17308cbfe47aa663ce7aed86a018217acfecfab7d1fd145a39ae282\" successfully" May 10 10:51:24.156241 containerd[1527]: time="2025-05-10T10:51:24.156062095Z" level=info msg="StopPodSandbox for \"34578985d17308cbfe47aa663ce7aed86a018217acfecfab7d1fd145a39ae282\" returns successfully" May 10 10:51:24.157017 containerd[1527]: time="2025-05-10T10:51:24.156665538Z" level=info msg="RemovePodSandbox for \"34578985d17308cbfe47aa663ce7aed86a018217acfecfab7d1fd145a39ae282\"" May 10 10:51:24.157017 containerd[1527]: time="2025-05-10T10:51:24.156719008Z" level=info msg="Forcibly stopping sandbox \"34578985d17308cbfe47aa663ce7aed86a018217acfecfab7d1fd145a39ae282\"" May 10 10:51:24.157017 containerd[1527]: time="2025-05-10T10:51:24.156802765Z" level=info msg="TearDown network for sandbox \"34578985d17308cbfe47aa663ce7aed86a018217acfecfab7d1fd145a39ae282\" successfully" May 10 10:51:24.161571 containerd[1527]: time="2025-05-10T10:51:24.159676037Z" level=info msg="Ensure that sandbox 34578985d17308cbfe47aa663ce7aed86a018217acfecfab7d1fd145a39ae282 in task-service has been cleanup successfully" May 10 10:51:24.167503 containerd[1527]: time="2025-05-10T10:51:24.167406502Z" level=info msg="RemovePodSandbox \"34578985d17308cbfe47aa663ce7aed86a018217acfecfab7d1fd145a39ae282\" returns successfully" May 10 10:51:25.583455 containerd[1527]: time="2025-05-10T10:51:25.583269659Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3682fa726c8e346b2cb0ae2e727005adc93cab0c3953e350da4b58c0c924dbb7\" id:\"c6d90f8acc48ad09aa092ec05e92d7ec4dec2c05f0a757109742fa4f9ad7e141\" pid:5420 exited_at:{seconds:1746874285 nanos:582880168}" May 10 10:51:25.850329 sshd[4664]: Connection closed by 172.24.4.1 port 34496 May 10 10:51:25.852415 sshd-session[4620]: pam_unix(sshd:session): session closed for user core May 10 10:51:25.862657 systemd-logind[1498]: Session 30 logged out. Waiting for processes to exit. May 10 10:51:25.863767 systemd[1]: sshd@27-172.24.4.188:22-172.24.4.1:34496.service: Deactivated successfully. 
May 10 10:51:25.870622 systemd[1]: session-30.scope: Deactivated successfully. May 10 10:51:25.875923 systemd-logind[1498]: Removed session 30. May 10 10:53:07.038109 update_engine[1504]: I20250510 10:53:07.037597 1504 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs May 10 10:53:07.038109 update_engine[1504]: I20250510 10:53:07.037922 1504 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs May 10 10:53:07.039988 update_engine[1504]: I20250510 10:53:07.038645 1504 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs May 10 10:53:07.040506 update_engine[1504]: I20250510 10:53:07.040418 1504 omaha_request_params.cc:62] Current group set to developer May 10 10:53:07.041190 update_engine[1504]: I20250510 10:53:07.041113 1504 update_attempter.cc:499] Already updated boot flags. Skipping. May 10 10:53:07.041190 update_engine[1504]: I20250510 10:53:07.041162 1504 update_attempter.cc:643] Scheduling an action processor start. May 10 10:53:07.041342 update_engine[1504]: I20250510 10:53:07.041225 1504 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction May 10 10:53:07.041794 update_engine[1504]: I20250510 10:53:07.041421 1504 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs May 10 10:53:07.041794 update_engine[1504]: I20250510 10:53:07.041613 1504 omaha_request_action.cc:271] Posting an Omaha request to disabled May 10 10:53:07.041794 update_engine[1504]: I20250510 10:53:07.041640 1504 omaha_request_action.cc:272] Request: May 10 10:53:07.041794 update_engine[1504]: May 10 10:53:07.041794 update_engine[1504]: May 10 10:53:07.041794 update_engine[1504]: May 10 10:53:07.041794 update_engine[1504]: May 10 10:53:07.041794 update_engine[1504]: May 10 10:53:07.041794 update_engine[1504]: May 10 10:53:07.041794 update_engine[1504]: May 10 10:53:07.041794 update_engine[1504]: May 10 10:53:07.041794 update_engine[1504]: I20250510 10:53:07.041678 1504 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 10 10:53:07.048766 update_engine[1504]: I20250510 10:53:07.048154 1504 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 10 10:53:07.049541 locksmithd[1551]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 May 10 10:53:07.050178 update_engine[1504]: I20250510 10:53:07.049892 1504 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. May 10 10:53:07.058317 update_engine[1504]: E20250510 10:53:07.058192 1504 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 10 10:53:07.058495 update_engine[1504]: I20250510 10:53:07.058441 1504 libcurl_http_fetcher.cc:283] No HTTP response, retry 1