May 10 09:59:06.096319 kernel: Linux version 6.6.89-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Sat May 10 08:33:52 -00 2025 May 10 09:59:06.096344 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=cdff7a1e66558670c3a31fd90f395811dccc4cb131ce51930f033b8634f7f080 May 10 09:59:06.096357 kernel: BIOS-provided physical RAM map: May 10 09:59:06.096365 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable May 10 09:59:06.096372 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved May 10 09:59:06.096380 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved May 10 09:59:06.096389 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdcfff] usable May 10 09:59:06.096396 kernel: BIOS-e820: [mem 0x00000000bffdd000-0x00000000bfffffff] reserved May 10 09:59:06.096404 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved May 10 09:59:06.096412 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved May 10 09:59:06.096419 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000013fffffff] usable May 10 09:59:06.096429 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved May 10 09:59:06.096436 kernel: NX (Execute Disable) protection: active May 10 09:59:06.096444 kernel: APIC: Static calls initialized May 10 09:59:06.096453 kernel: SMBIOS 3.0.0 present. May 10 09:59:06.096461 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.16.3-debian-1.16.3-2 04/01/2014 May 10 09:59:06.096471 kernel: Hypervisor detected: KVM May 10 09:59:06.096479 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 May 10 09:59:06.096487 kernel: kvm-clock: using sched offset of 4713133137 cycles May 10 09:59:06.096495 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns May 10 09:59:06.096504 kernel: tsc: Detected 1996.249 MHz processor May 10 09:59:06.096512 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved May 10 09:59:06.096521 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable May 10 09:59:06.096530 kernel: last_pfn = 0x140000 max_arch_pfn = 0x400000000 May 10 09:59:06.096538 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs May 10 09:59:06.096547 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT May 10 09:59:06.096557 kernel: last_pfn = 0xbffdd max_arch_pfn = 0x400000000 May 10 09:59:06.096565 kernel: ACPI: Early table checksum verification disabled May 10 09:59:06.096574 kernel: ACPI: RSDP 0x00000000000F51E0 000014 (v00 BOCHS ) May 10 09:59:06.096582 kernel: ACPI: RSDT 0x00000000BFFE1B65 000030 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 10 09:59:06.096590 kernel: ACPI: FACP 0x00000000BFFE1A49 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 10 09:59:06.096598 kernel: ACPI: DSDT 0x00000000BFFE0040 001A09 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 10 09:59:06.096606 kernel: ACPI: FACS 0x00000000BFFE0000 000040 May 10 09:59:06.096615 kernel: ACPI: APIC 0x00000000BFFE1ABD 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) May 10 09:59:06.096623 kernel: ACPI: WAET 0x00000000BFFE1B3D 000028 (v01 
BOCHS BXPC 00000001 BXPC 00000001) May 10 09:59:06.096633 kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1a49-0xbffe1abc] May 10 09:59:06.096641 kernel: ACPI: Reserving DSDT table memory at [mem 0xbffe0040-0xbffe1a48] May 10 09:59:06.096649 kernel: ACPI: Reserving FACS table memory at [mem 0xbffe0000-0xbffe003f] May 10 09:59:06.096658 kernel: ACPI: Reserving APIC table memory at [mem 0xbffe1abd-0xbffe1b3c] May 10 09:59:06.096666 kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1b3d-0xbffe1b64] May 10 09:59:06.096677 kernel: No NUMA configuration found May 10 09:59:06.096687 kernel: Faking a node at [mem 0x0000000000000000-0x000000013fffffff] May 10 09:59:06.096696 kernel: NODE_DATA(0) allocated [mem 0x13fff5000-0x13fffcfff] May 10 09:59:06.096705 kernel: Zone ranges: May 10 09:59:06.096713 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] May 10 09:59:06.096722 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] May 10 09:59:06.096730 kernel: Normal [mem 0x0000000100000000-0x000000013fffffff] May 10 09:59:06.096739 kernel: Device empty May 10 09:59:06.096747 kernel: Movable zone start for each node May 10 09:59:06.096757 kernel: Early memory node ranges May 10 09:59:06.096766 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] May 10 09:59:06.096774 kernel: node 0: [mem 0x0000000000100000-0x00000000bffdcfff] May 10 09:59:06.096783 kernel: node 0: [mem 0x0000000100000000-0x000000013fffffff] May 10 09:59:06.096792 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000013fffffff] May 10 09:59:06.096800 kernel: On node 0, zone DMA: 1 pages in unavailable ranges May 10 09:59:06.096809 kernel: On node 0, zone DMA: 97 pages in unavailable ranges May 10 09:59:06.096818 kernel: On node 0, zone Normal: 35 pages in unavailable ranges May 10 09:59:06.096826 kernel: ACPI: PM-Timer IO Port: 0x608 May 10 09:59:06.096835 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) May 10 09:59:06.096860 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 May 10 09:59:06.096869 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) May 10 09:59:06.096888 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) May 10 09:59:06.096922 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) May 10 09:59:06.096955 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) May 10 09:59:06.096982 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) May 10 09:59:06.096991 kernel: ACPI: Using ACPI (MADT) for SMP configuration information May 10 09:59:06.096999 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs May 10 09:59:06.097008 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() May 10 09:59:06.097020 kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices May 10 09:59:06.097029 kernel: Booting paravirtualized kernel on KVM May 10 09:59:06.097038 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns May 10 09:59:06.097047 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 May 10 09:59:06.097055 kernel: percpu: Embedded 58 pages/cpu s197416 r8192 d31960 u1048576 May 10 09:59:06.097064 kernel: pcpu-alloc: s197416 r8192 d31960 u1048576 alloc=1*2097152 May 10 09:59:06.097072 kernel: pcpu-alloc: [0] 0 1 May 10 09:59:06.097081 kernel: kvm-guest: PV spinlocks disabled, no host support May 10 09:59:06.097091 kernel: Kernel command line: rootflags=rw mount.usrflags=ro 
BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=cdff7a1e66558670c3a31fd90f395811dccc4cb131ce51930f033b8634f7f080 May 10 09:59:06.097102 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. May 10 09:59:06.097110 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) May 10 09:59:06.097119 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 10 09:59:06.097128 kernel: Fallback order for Node 0: 0 May 10 09:59:06.097136 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1031901 May 10 09:59:06.097145 kernel: Policy zone: Normal May 10 09:59:06.097154 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off May 10 09:59:06.097164 kernel: software IO TLB: area num 2. May 10 09:59:06.097173 kernel: Memory: 3968244K/4193772K available (14336K kernel code, 2309K rwdata, 9044K rodata, 53680K init, 1596K bss, 225268K reserved, 0K cma-reserved) May 10 09:59:06.097182 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 May 10 09:59:06.097191 kernel: ftrace: allocating 38190 entries in 150 pages May 10 09:59:06.097199 kernel: ftrace: allocated 150 pages with 4 groups May 10 09:59:06.097208 kernel: Dynamic Preempt: voluntary May 10 09:59:06.097217 kernel: rcu: Preemptible hierarchical RCU implementation. May 10 09:59:06.097229 kernel: rcu: RCU event tracing is enabled. May 10 09:59:06.097238 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. May 10 09:59:06.097271 kernel: Trampoline variant of Tasks RCU enabled. May 10 09:59:06.097281 kernel: Rude variant of Tasks RCU enabled. May 10 09:59:06.097290 kernel: Tracing variant of Tasks RCU enabled. May 10 09:59:06.097299 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. May 10 09:59:06.097307 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 May 10 09:59:06.097316 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 May 10 09:59:06.097324 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. May 10 09:59:06.097333 kernel: Console: colour VGA+ 80x25 May 10 09:59:06.097341 kernel: printk: console [tty0] enabled May 10 09:59:06.097350 kernel: printk: console [ttyS0] enabled May 10 09:59:06.097361 kernel: ACPI: Core revision 20230628 May 10 09:59:06.097370 kernel: APIC: Switch to symmetric I/O mode setup May 10 09:59:06.097379 kernel: x2apic enabled May 10 09:59:06.097387 kernel: APIC: Switched APIC routing to: physical x2apic May 10 09:59:06.097396 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 May 10 09:59:06.097405 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized May 10 09:59:06.097413 kernel: Calibrating delay loop (skipped) preset value.. 
3992.49 BogoMIPS (lpj=1996249) May 10 09:59:06.097422 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 May 10 09:59:06.097431 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 May 10 09:59:06.097441 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization May 10 09:59:06.097450 kernel: Spectre V2 : Mitigation: Retpolines May 10 09:59:06.097459 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT May 10 09:59:06.097467 kernel: Speculative Store Bypass: Vulnerable May 10 09:59:06.097476 kernel: x86/fpu: x87 FPU will use FXSAVE May 10 09:59:06.097485 kernel: Freeing SMP alternatives memory: 32K May 10 09:59:06.097500 kernel: pid_max: default: 32768 minimum: 301 May 10 09:59:06.097510 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity May 10 09:59:06.097519 kernel: landlock: Up and running. May 10 09:59:06.097528 kernel: SELinux: Initializing. May 10 09:59:06.097538 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 10 09:59:06.097547 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 10 09:59:06.097558 kernel: smpboot: CPU0: AMD Intel Core i7 9xx (Nehalem Class Core i7) (family: 0x6, model: 0x1a, stepping: 0x3) May 10 09:59:06.097567 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. May 10 09:59:06.097576 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. May 10 09:59:06.097586 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. May 10 09:59:06.097596 kernel: Performance Events: AMD PMU driver. May 10 09:59:06.097605 kernel: ... version: 0 May 10 09:59:06.097614 kernel: ... bit width: 48 May 10 09:59:06.097623 kernel: ... generic registers: 4 May 10 09:59:06.097632 kernel: ... value mask: 0000ffffffffffff May 10 09:59:06.097641 kernel: ... max period: 00007fffffffffff May 10 09:59:06.097650 kernel: ... fixed-purpose events: 0 May 10 09:59:06.097659 kernel: ... event mask: 000000000000000f May 10 09:59:06.097668 kernel: signal: max sigframe size: 1440 May 10 09:59:06.097679 kernel: rcu: Hierarchical SRCU implementation. May 10 09:59:06.097688 kernel: rcu: Max phase no-delay instances is 400. May 10 09:59:06.097697 kernel: smp: Bringing up secondary CPUs ... May 10 09:59:06.097706 kernel: smpboot: x86: Booting SMP configuration: May 10 09:59:06.097715 kernel: .... 
node #0, CPUs: #1 May 10 09:59:06.097724 kernel: smp: Brought up 1 node, 2 CPUs May 10 09:59:06.097733 kernel: smpboot: Max logical packages: 2 May 10 09:59:06.097742 kernel: smpboot: Total of 2 processors activated (7984.99 BogoMIPS) May 10 09:59:06.097750 kernel: devtmpfs: initialized May 10 09:59:06.097759 kernel: x86/mm: Memory block size: 128MB May 10 09:59:06.097771 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns May 10 09:59:06.097780 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) May 10 09:59:06.097789 kernel: pinctrl core: initialized pinctrl subsystem May 10 09:59:06.097798 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family May 10 09:59:06.097807 kernel: audit: initializing netlink subsys (disabled) May 10 09:59:06.097816 kernel: audit: type=2000 audit(1746871142.835:1): state=initialized audit_enabled=0 res=1 May 10 09:59:06.097825 kernel: thermal_sys: Registered thermal governor 'step_wise' May 10 09:59:06.097834 kernel: thermal_sys: Registered thermal governor 'user_space' May 10 09:59:06.097843 kernel: cpuidle: using governor menu May 10 09:59:06.097853 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 May 10 09:59:06.097862 kernel: dca service started, version 1.12.1 May 10 09:59:06.097871 kernel: PCI: Using configuration type 1 for base access May 10 09:59:06.097880 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. May 10 09:59:06.097889 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages May 10 09:59:06.097898 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page May 10 09:59:06.097907 kernel: ACPI: Added _OSI(Module Device) May 10 09:59:06.097916 kernel: ACPI: Added _OSI(Processor Device) May 10 09:59:06.097925 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) May 10 09:59:06.097936 kernel: ACPI: Added _OSI(Processor Aggregator Device) May 10 09:59:06.097945 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded May 10 09:59:06.097954 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC May 10 09:59:06.097963 kernel: ACPI: Interpreter enabled May 10 09:59:06.097972 kernel: ACPI: PM: (supports S0 S3 S5) May 10 09:59:06.097981 kernel: ACPI: Using IOAPIC for interrupt routing May 10 09:59:06.097990 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug May 10 09:59:06.097999 kernel: PCI: Using E820 reservations for host bridge windows May 10 09:59:06.098008 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F May 10 09:59:06.098019 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) May 10 09:59:06.098157 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] May 10 09:59:06.099313 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] May 10 09:59:06.099413 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge May 10 09:59:06.099427 kernel: acpiphp: Slot [3] registered May 10 09:59:06.099436 kernel: acpiphp: Slot [4] registered May 10 09:59:06.099445 kernel: acpiphp: Slot [5] registered May 10 09:59:06.099459 kernel: acpiphp: Slot [6] registered May 10 09:59:06.099468 kernel: acpiphp: Slot [7] registered May 10 09:59:06.099477 kernel: acpiphp: Slot [8] registered May 10 09:59:06.099485 kernel: acpiphp: Slot [9] registered May 10 09:59:06.099495 kernel: acpiphp: Slot [10] registered May 10 09:59:06.099504 
kernel: acpiphp: Slot [11] registered May 10 09:59:06.099513 kernel: acpiphp: Slot [12] registered May 10 09:59:06.099522 kernel: acpiphp: Slot [13] registered May 10 09:59:06.099531 kernel: acpiphp: Slot [14] registered May 10 09:59:06.099541 kernel: acpiphp: Slot [15] registered May 10 09:59:06.099550 kernel: acpiphp: Slot [16] registered May 10 09:59:06.099559 kernel: acpiphp: Slot [17] registered May 10 09:59:06.099568 kernel: acpiphp: Slot [18] registered May 10 09:59:06.099577 kernel: acpiphp: Slot [19] registered May 10 09:59:06.099586 kernel: acpiphp: Slot [20] registered May 10 09:59:06.099595 kernel: acpiphp: Slot [21] registered May 10 09:59:06.099604 kernel: acpiphp: Slot [22] registered May 10 09:59:06.099613 kernel: acpiphp: Slot [23] registered May 10 09:59:06.099622 kernel: acpiphp: Slot [24] registered May 10 09:59:06.099632 kernel: acpiphp: Slot [25] registered May 10 09:59:06.099641 kernel: acpiphp: Slot [26] registered May 10 09:59:06.099650 kernel: acpiphp: Slot [27] registered May 10 09:59:06.099659 kernel: acpiphp: Slot [28] registered May 10 09:59:06.099668 kernel: acpiphp: Slot [29] registered May 10 09:59:06.099677 kernel: acpiphp: Slot [30] registered May 10 09:59:06.099686 kernel: acpiphp: Slot [31] registered May 10 09:59:06.099695 kernel: PCI host bridge to bus 0000:00 May 10 09:59:06.099790 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] May 10 09:59:06.099879 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] May 10 09:59:06.099961 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] May 10 09:59:06.100042 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] May 10 09:59:06.100122 kernel: pci_bus 0000:00: root bus resource [mem 0xc000000000-0xc07fffffff window] May 10 09:59:06.100202 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] May 10 09:59:06.100373 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 May 10 09:59:06.100496 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 May 10 09:59:06.100598 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 May 10 09:59:06.100690 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc120-0xc12f] May 10 09:59:06.100780 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] May 10 09:59:06.100873 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6] May 10 09:59:06.100965 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] May 10 09:59:06.101055 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376] May 10 09:59:06.101159 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 May 10 09:59:06.104327 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI May 10 09:59:06.104423 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB May 10 09:59:06.104527 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 May 10 09:59:06.104622 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref] May 10 09:59:06.104716 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xc000000000-0xc000003fff 64bit pref] May 10 09:59:06.104815 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfeb90000-0xfeb90fff] May 10 09:59:06.104907 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfeb80000-0xfeb8ffff pref] May 10 09:59:06.104999 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] May 10 09:59:06.105099 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 May 10 09:59:06.105192 kernel: pci 
0000:00:03.0: reg 0x10: [io 0xc080-0xc0bf] May 10 09:59:06.105334 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfeb91000-0xfeb91fff] May 10 09:59:06.105428 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xc000004000-0xc000007fff 64bit pref] May 10 09:59:06.105524 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb00000-0xfeb7ffff pref] May 10 09:59:06.105621 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 May 10 09:59:06.105712 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f] May 10 09:59:06.105802 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfeb92000-0xfeb92fff] May 10 09:59:06.105892 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xc000008000-0xc00000bfff 64bit pref] May 10 09:59:06.105988 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 May 10 09:59:06.106079 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc0c0-0xc0ff] May 10 09:59:06.106174 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xc00000c000-0xc00000ffff 64bit pref] May 10 09:59:06.108308 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 May 10 09:59:06.108408 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc100-0xc11f] May 10 09:59:06.108499 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfeb93000-0xfeb93fff] May 10 09:59:06.108589 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xc000010000-0xc000013fff 64bit pref] May 10 09:59:06.108603 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 May 10 09:59:06.108613 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 May 10 09:59:06.108622 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 May 10 09:59:06.108636 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 May 10 09:59:06.108645 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 May 10 09:59:06.108654 kernel: iommu: Default domain type: Translated May 10 09:59:06.108663 kernel: iommu: DMA domain TLB invalidation policy: lazy mode May 10 09:59:06.108672 kernel: PCI: Using ACPI for IRQ routing May 10 09:59:06.108681 kernel: PCI: pci_cache_line_size set to 64 bytes May 10 09:59:06.108691 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] May 10 09:59:06.108700 kernel: e820: reserve RAM buffer [mem 0xbffdd000-0xbfffffff] May 10 09:59:06.108789 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device May 10 09:59:06.108885 kernel: pci 0000:00:02.0: vgaarb: bridge control possible May 10 09:59:06.108976 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none May 10 09:59:06.108989 kernel: vgaarb: loaded May 10 09:59:06.108998 kernel: clocksource: Switched to clocksource kvm-clock May 10 09:59:06.109008 kernel: VFS: Disk quotas dquot_6.6.0 May 10 09:59:06.109017 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) May 10 09:59:06.109026 kernel: pnp: PnP ACPI init May 10 09:59:06.109117 kernel: pnp 00:03: [dma 2] May 10 09:59:06.109135 kernel: pnp: PnP ACPI: found 5 devices May 10 09:59:06.109144 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns May 10 09:59:06.109153 kernel: NET: Registered PF_INET protocol family May 10 09:59:06.109162 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) May 10 09:59:06.109172 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) May 10 09:59:06.109181 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) May 10 09:59:06.109190 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) May 10 09:59:06.109199 kernel: TCP bind hash table entries: 
32768 (order: 8, 1048576 bytes, linear) May 10 09:59:06.109209 kernel: TCP: Hash tables configured (established 32768 bind 32768) May 10 09:59:06.109220 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) May 10 09:59:06.109229 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) May 10 09:59:06.109238 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family May 10 09:59:06.109296 kernel: NET: Registered PF_XDP protocol family May 10 09:59:06.109385 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] May 10 09:59:06.109466 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] May 10 09:59:06.109544 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] May 10 09:59:06.109623 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window] May 10 09:59:06.109706 kernel: pci_bus 0000:00: resource 8 [mem 0xc000000000-0xc07fffffff window] May 10 09:59:06.109797 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release May 10 09:59:06.109888 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers May 10 09:59:06.109901 kernel: PCI: CLS 0 bytes, default 64 May 10 09:59:06.109911 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) May 10 09:59:06.109920 kernel: software IO TLB: mapped [mem 0x00000000bbfdd000-0x00000000bffdd000] (64MB) May 10 09:59:06.109929 kernel: Initialise system trusted keyrings May 10 09:59:06.109938 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 May 10 09:59:06.109951 kernel: Key type asymmetric registered May 10 09:59:06.109959 kernel: Asymmetric key parser 'x509' registered May 10 09:59:06.109968 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) May 10 09:59:06.109978 kernel: io scheduler mq-deadline registered May 10 09:59:06.109987 kernel: io scheduler kyber registered May 10 09:59:06.109996 kernel: io scheduler bfq registered May 10 09:59:06.110005 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 May 10 09:59:06.110014 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10 May 10 09:59:06.110024 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 May 10 09:59:06.110035 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 May 10 09:59:06.110044 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 May 10 09:59:06.110054 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled May 10 09:59:06.110063 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A May 10 09:59:06.110072 kernel: random: crng init done May 10 09:59:06.110081 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 May 10 09:59:06.110090 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 May 10 09:59:06.110099 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 May 10 09:59:06.110192 kernel: rtc_cmos 00:04: RTC can wake from S4 May 10 09:59:06.111325 kernel: rtc_cmos 00:04: registered as rtc0 May 10 09:59:06.111342 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 May 10 09:59:06.111424 kernel: rtc_cmos 00:04: setting system clock to 2025-05-10T09:59:05 UTC (1746871145) May 10 09:59:06.111507 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram May 10 09:59:06.111521 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled May 10 09:59:06.111530 kernel: NET: Registered PF_INET6 protocol family May 10 09:59:06.111540 kernel: Segment Routing with IPv6 May 10 09:59:06.111549 kernel: In-situ OAM (IOAM) with IPv6 May 10 09:59:06.111562 kernel: NET: Registered PF_PACKET 
protocol family May 10 09:59:06.111571 kernel: Key type dns_resolver registered May 10 09:59:06.111580 kernel: IPI shorthand broadcast: enabled May 10 09:59:06.111589 kernel: sched_clock: Marking stable (3584007761, 182465957)->(3804284495, -37810777) May 10 09:59:06.111599 kernel: registered taskstats version 1 May 10 09:59:06.111608 kernel: Loading compiled-in X.509 certificates May 10 09:59:06.111617 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.89-flatcar: f8080549509982706805ea0b811f8f4bcb4a274e' May 10 09:59:06.111626 kernel: Key type .fscrypt registered May 10 09:59:06.111635 kernel: Key type fscrypt-provisioning registered May 10 09:59:06.111646 kernel: ima: No TPM chip found, activating TPM-bypass! May 10 09:59:06.111655 kernel: ima: Allocated hash algorithm: sha1 May 10 09:59:06.111664 kernel: ima: No architecture policies found May 10 09:59:06.111673 kernel: clk: Disabling unused clocks May 10 09:59:06.111682 kernel: Warning: unable to open an initial console. May 10 09:59:06.111692 kernel: Freeing unused kernel image (initmem) memory: 53680K May 10 09:59:06.111701 kernel: Write protecting the kernel read-only data: 24576k May 10 09:59:06.111710 kernel: Freeing unused kernel image (rodata/data gap) memory: 1196K May 10 09:59:06.111719 kernel: Run /init as init process May 10 09:59:06.111730 kernel: with arguments: May 10 09:59:06.111739 kernel: /init May 10 09:59:06.111748 kernel: with environment: May 10 09:59:06.111757 kernel: HOME=/ May 10 09:59:06.111765 kernel: TERM=linux May 10 09:59:06.111774 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 10 09:59:06.111785 systemd[1]: Successfully made /usr/ read-only. May 10 09:59:06.111798 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 10 09:59:06.111810 systemd[1]: Detected virtualization kvm. May 10 09:59:06.111819 systemd[1]: Detected architecture x86-64. May 10 09:59:06.111829 systemd[1]: Running in initrd. May 10 09:59:06.111838 systemd[1]: No hostname configured, using default hostname. May 10 09:59:06.111849 systemd[1]: Hostname set to . May 10 09:59:06.111858 systemd[1]: Initializing machine ID from VM UUID. May 10 09:59:06.111868 systemd[1]: Queued start job for default target initrd.target. May 10 09:59:06.111880 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 10 09:59:06.111899 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 10 09:59:06.111911 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... May 10 09:59:06.111921 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 10 09:59:06.111931 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... May 10 09:59:06.111944 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... May 10 09:59:06.111955 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... May 10 09:59:06.111965 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... 
May 10 09:59:06.111975 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 10 09:59:06.111985 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 10 09:59:06.111995 systemd[1]: Reached target paths.target - Path Units. May 10 09:59:06.112005 systemd[1]: Reached target slices.target - Slice Units. May 10 09:59:06.112015 systemd[1]: Reached target swap.target - Swaps. May 10 09:59:06.112026 systemd[1]: Reached target timers.target - Timer Units. May 10 09:59:06.112036 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. May 10 09:59:06.112046 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 10 09:59:06.112056 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). May 10 09:59:06.112066 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. May 10 09:59:06.112076 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 10 09:59:06.112086 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 10 09:59:06.112097 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 10 09:59:06.112106 systemd[1]: Reached target sockets.target - Socket Units. May 10 09:59:06.112118 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... May 10 09:59:06.112128 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 10 09:59:06.112138 systemd[1]: Finished network-cleanup.service - Network Cleanup. May 10 09:59:06.112149 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). May 10 09:59:06.112159 systemd[1]: Starting systemd-fsck-usr.service... May 10 09:59:06.112169 systemd[1]: Starting systemd-journald.service - Journal Service... May 10 09:59:06.112179 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 10 09:59:06.112189 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 10 09:59:06.112200 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. May 10 09:59:06.112211 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 10 09:59:06.112221 systemd[1]: Finished systemd-fsck-usr.service. May 10 09:59:06.112234 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 10 09:59:06.113298 systemd-journald[186]: Collecting audit messages is disabled. May 10 09:59:06.113330 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 10 09:59:06.113343 systemd-journald[186]: Journal started May 10 09:59:06.113365 systemd-journald[186]: Runtime Journal (/run/log/journal/47e66d72309b4c0e96292d5f48722efa) is 8M, max 78.5M, 70.5M free. May 10 09:59:06.099133 systemd-modules-load[188]: Inserted module 'overlay' May 10 09:59:06.149524 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 10 09:59:06.149553 kernel: Bridge firewalling registered May 10 09:59:06.149575 systemd[1]: Started systemd-journald.service - Journal Service. May 10 09:59:06.127659 systemd-modules-load[188]: Inserted module 'br_netfilter' May 10 09:59:06.150088 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. 
May 10 09:59:06.151054 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 10 09:59:06.154435 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 10 09:59:06.157349 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 10 09:59:06.165349 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 10 09:59:06.167546 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 10 09:59:06.172185 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 10 09:59:06.183225 systemd-tmpfiles[204]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. May 10 09:59:06.183749 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 10 09:59:06.192009 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 10 09:59:06.192768 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 10 09:59:06.195393 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... May 10 09:59:06.197356 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 10 09:59:06.219275 dracut-cmdline[225]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=cdff7a1e66558670c3a31fd90f395811dccc4cb131ce51930f033b8634f7f080 May 10 09:59:06.237985 systemd-resolved[226]: Positive Trust Anchors: May 10 09:59:06.238789 systemd-resolved[226]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 10 09:59:06.238835 systemd-resolved[226]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 10 09:59:06.244636 systemd-resolved[226]: Defaulting to hostname 'linux'. May 10 09:59:06.245655 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 10 09:59:06.246454 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 10 09:59:06.293316 kernel: SCSI subsystem initialized May 10 09:59:06.304317 kernel: Loading iSCSI transport class v2.0-870. May 10 09:59:06.316767 kernel: iscsi: registered transport (tcp) May 10 09:59:06.339334 kernel: iscsi: registered transport (qla4xxx) May 10 09:59:06.339459 kernel: QLogic iSCSI HBA Driver May 10 09:59:06.365201 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 10 09:59:06.379743 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 10 09:59:06.390792 systemd[1]: Reached target network-pre.target - Preparation for Network. 
May 10 09:59:06.482981 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. May 10 09:59:06.487684 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... May 10 09:59:06.581523 kernel: raid6: sse2x4 gen() 5107 MB/s May 10 09:59:06.599368 kernel: raid6: sse2x2 gen() 14934 MB/s May 10 09:59:06.617688 kernel: raid6: sse2x1 gen() 10059 MB/s May 10 09:59:06.617818 kernel: raid6: using algorithm sse2x2 gen() 14934 MB/s May 10 09:59:06.636843 kernel: raid6: .... xor() 9454 MB/s, rmw enabled May 10 09:59:06.636923 kernel: raid6: using ssse3x2 recovery algorithm May 10 09:59:06.659357 kernel: xor: measuring software checksum speed May 10 09:59:06.659464 kernel: prefetch64-sse : 14176 MB/sec May 10 09:59:06.661676 kernel: generic_sse : 16846 MB/sec May 10 09:59:06.661732 kernel: xor: using function: generic_sse (16846 MB/sec) May 10 09:59:06.841301 kernel: Btrfs loaded, zoned=no, fsverity=no May 10 09:59:06.850304 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. May 10 09:59:06.854290 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 10 09:59:06.901924 systemd-udevd[435]: Using default interface naming scheme 'v255'. May 10 09:59:06.912484 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 10 09:59:06.916909 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... May 10 09:59:06.944930 dracut-pre-trigger[441]: rd.md=0: removing MD RAID activation May 10 09:59:06.977358 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. May 10 09:59:06.980603 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 10 09:59:07.033489 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 10 09:59:07.037375 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... May 10 09:59:07.126547 kernel: virtio_blk virtio2: 2/0/0 default/read/poll queues May 10 09:59:07.136271 kernel: virtio_blk virtio2: [vda] 20971520 512-byte logical blocks (10.7 GB/10.0 GiB) May 10 09:59:07.155233 kernel: libata version 3.00 loaded. May 10 09:59:07.155378 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. May 10 09:59:07.155421 kernel: GPT:17805311 != 20971519 May 10 09:59:07.155461 kernel: GPT:Alternate GPT header not at the end of the disk. May 10 09:59:07.155481 kernel: GPT:17805311 != 20971519 May 10 09:59:07.155492 kernel: GPT: Use GNU Parted to correct GPT errors. May 10 09:59:07.155503 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 10 09:59:07.167346 kernel: ata_piix 0000:00:01.1: version 2.13 May 10 09:59:07.168828 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 10 09:59:07.170166 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 10 09:59:07.172707 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... May 10 09:59:07.177727 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 May 10 09:59:07.178462 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
May 10 09:59:07.180284 kernel: scsi host0: ata_piix May 10 09:59:07.180440 kernel: scsi host1: ata_piix May 10 09:59:07.183653 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc120 irq 14 May 10 09:59:07.183680 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc128 irq 15 May 10 09:59:07.186524 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. May 10 09:59:07.220294 kernel: BTRFS: device fsid 447a9416-2d70-470c-8858-df3b82fa5271 devid 1 transid 39 /dev/vda3 scanned by (udev-worker) (492) May 10 09:59:07.223677 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. May 10 09:59:07.267242 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (484) May 10 09:59:07.279520 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 10 09:59:07.300315 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. May 10 09:59:07.312737 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. May 10 09:59:07.313320 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. May 10 09:59:07.331182 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 10 09:59:07.333427 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... May 10 09:59:07.342589 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. May 10 09:59:07.344981 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. May 10 09:59:07.345814 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 10 09:59:07.347136 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 10 09:59:07.349350 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... May 10 09:59:07.351740 disk-uuid[534]: Primary Header is updated. May 10 09:59:07.351740 disk-uuid[534]: Secondary Entries is updated. May 10 09:59:07.351740 disk-uuid[534]: Secondary Header is updated. May 10 09:59:07.360277 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 10 09:59:07.373173 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. May 10 09:59:08.379313 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 10 09:59:08.381594 disk-uuid[540]: The operation has completed successfully. May 10 09:59:08.459879 systemd[1]: disk-uuid.service: Deactivated successfully. May 10 09:59:08.460067 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. May 10 09:59:08.507036 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... May 10 09:59:08.526393 sh[558]: Success May 10 09:59:08.550545 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. May 10 09:59:08.550606 kernel: device-mapper: uevent: version 1.0.3 May 10 09:59:08.552724 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com May 10 09:59:08.566281 kernel: device-mapper: verity: sha256 using implementation "sha256-ssse3" May 10 09:59:08.634007 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. May 10 09:59:08.638333 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... 
May 10 09:59:08.651752 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. May 10 09:59:08.673880 kernel: BTRFS info (device dm-0): first mount of filesystem 447a9416-2d70-470c-8858-df3b82fa5271 May 10 09:59:08.673963 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm May 10 09:59:08.678628 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead May 10 09:59:08.683529 kernel: BTRFS info (device dm-0): disabling log replay at mount time May 10 09:59:08.689632 kernel: BTRFS info (device dm-0): using free space tree May 10 09:59:08.706583 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. May 10 09:59:08.708597 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. May 10 09:59:08.710535 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. May 10 09:59:08.713484 systemd[1]: Starting ignition-setup.service - Ignition (setup)... May 10 09:59:08.719506 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... May 10 09:59:08.773365 kernel: BTRFS info (device vda6): first mount of filesystem b607f6a7-c99c-4217-b084-4c38060efb12 May 10 09:59:08.773474 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 10 09:59:08.773506 kernel: BTRFS info (device vda6): using free space tree May 10 09:59:08.785343 kernel: BTRFS info (device vda6): auto enabling async discard May 10 09:59:08.797301 kernel: BTRFS info (device vda6): last unmount of filesystem b607f6a7-c99c-4217-b084-4c38060efb12 May 10 09:59:08.804919 systemd[1]: Finished ignition-setup.service - Ignition (setup). May 10 09:59:08.808386 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... May 10 09:59:08.873678 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 10 09:59:08.875927 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 10 09:59:08.913570 systemd-networkd[741]: lo: Link UP May 10 09:59:08.914354 systemd-networkd[741]: lo: Gained carrier May 10 09:59:08.916077 systemd-networkd[741]: Enumeration completed May 10 09:59:08.916420 systemd[1]: Started systemd-networkd.service - Network Configuration. May 10 09:59:08.916796 systemd-networkd[741]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 10 09:59:08.916801 systemd-networkd[741]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 10 09:59:08.918843 systemd-networkd[741]: eth0: Link UP May 10 09:59:08.918847 systemd-networkd[741]: eth0: Gained carrier May 10 09:59:08.918860 systemd-networkd[741]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 10 09:59:08.919778 systemd[1]: Reached target network.target - Network. 
May 10 09:59:08.935456 systemd-networkd[741]: eth0: DHCPv4 address 172.24.4.22/24, gateway 172.24.4.1 acquired from 172.24.4.1 May 10 09:59:09.019238 ignition[660]: Ignition 2.21.0 May 10 09:59:09.019268 ignition[660]: Stage: fetch-offline May 10 09:59:09.019304 ignition[660]: no configs at "/usr/lib/ignition/base.d" May 10 09:59:09.019314 ignition[660]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" May 10 09:59:09.019401 ignition[660]: parsed url from cmdline: "" May 10 09:59:09.019405 ignition[660]: no config URL provided May 10 09:59:09.019410 ignition[660]: reading system config file "/usr/lib/ignition/user.ign" May 10 09:59:09.019419 ignition[660]: no config at "/usr/lib/ignition/user.ign" May 10 09:59:09.024593 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). May 10 09:59:09.019424 ignition[660]: failed to fetch config: resource requires networking May 10 09:59:09.026937 systemd-resolved[226]: Detected conflict on linux IN A 172.24.4.22 May 10 09:59:09.019581 ignition[660]: Ignition finished successfully May 10 09:59:09.026947 systemd-resolved[226]: Hostname conflict, changing published hostname from 'linux' to 'linux6'. May 10 09:59:09.028445 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... May 10 09:59:09.054584 ignition[751]: Ignition 2.21.0 May 10 09:59:09.054597 ignition[751]: Stage: fetch May 10 09:59:09.055974 ignition[751]: no configs at "/usr/lib/ignition/base.d" May 10 09:59:09.055989 ignition[751]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" May 10 09:59:09.056071 ignition[751]: parsed url from cmdline: "" May 10 09:59:09.056075 ignition[751]: no config URL provided May 10 09:59:09.056081 ignition[751]: reading system config file "/usr/lib/ignition/user.ign" May 10 09:59:09.056090 ignition[751]: no config at "/usr/lib/ignition/user.ign" May 10 09:59:09.056295 ignition[751]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... May 10 09:59:09.056306 ignition[751]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... May 10 09:59:09.056312 ignition[751]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 May 10 09:59:09.463482 ignition[751]: GET result: OK May 10 09:59:09.463727 ignition[751]: parsing config with SHA512: b6d40c534b20b78788385e8bc9f67d900592b3e7322ad8bd97c0a967edcb1bcc5c650236452d9ad356be737a90e550381e14756d0f832f64abbd03d000dc956a May 10 09:59:09.480572 unknown[751]: fetched base config from "system" May 10 09:59:09.480603 unknown[751]: fetched base config from "system" May 10 09:59:09.481967 ignition[751]: fetch: fetch complete May 10 09:59:09.480623 unknown[751]: fetched user config from "openstack" May 10 09:59:09.481980 ignition[751]: fetch: fetch passed May 10 09:59:09.487454 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). May 10 09:59:09.482071 ignition[751]: Ignition finished successfully May 10 09:59:09.492556 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... May 10 09:59:09.545654 ignition[757]: Ignition 2.21.0 May 10 09:59:09.545689 ignition[757]: Stage: kargs May 10 09:59:09.546010 ignition[757]: no configs at "/usr/lib/ignition/base.d" May 10 09:59:09.546037 ignition[757]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" May 10 09:59:09.548007 ignition[757]: kargs: kargs passed May 10 09:59:09.550637 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). 
May 10 09:59:09.548101 ignition[757]: Ignition finished successfully May 10 09:59:09.557142 systemd[1]: Starting ignition-disks.service - Ignition (disks)... May 10 09:59:09.606387 ignition[763]: Ignition 2.21.0 May 10 09:59:09.606412 ignition[763]: Stage: disks May 10 09:59:09.607130 ignition[763]: no configs at "/usr/lib/ignition/base.d" May 10 09:59:09.607161 ignition[763]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" May 10 09:59:09.612844 ignition[763]: disks: disks passed May 10 09:59:09.612991 ignition[763]: Ignition finished successfully May 10 09:59:09.617563 systemd[1]: Finished ignition-disks.service - Ignition (disks). May 10 09:59:09.620058 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. May 10 09:59:09.622000 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. May 10 09:59:09.624990 systemd[1]: Reached target local-fs.target - Local File Systems. May 10 09:59:09.627910 systemd[1]: Reached target sysinit.target - System Initialization. May 10 09:59:09.630429 systemd[1]: Reached target basic.target - Basic System. May 10 09:59:09.635174 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... May 10 09:59:09.681495 systemd-fsck[771]: ROOT: clean, 15/1628000 files, 120826/1617920 blocks May 10 09:59:09.692179 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. May 10 09:59:09.696612 systemd[1]: Mounting sysroot.mount - /sysroot... May 10 09:59:09.891278 kernel: EXT4-fs (vda9): mounted filesystem f8cce592-76ea-4219-9560-1ef21b28761f r/w with ordered data mode. Quota mode: none. May 10 09:59:09.891851 systemd[1]: Mounted sysroot.mount - /sysroot. May 10 09:59:09.894520 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. May 10 09:59:09.898613 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 10 09:59:09.903436 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... May 10 09:59:09.905818 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. May 10 09:59:09.912969 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent... May 10 09:59:09.915709 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 10 09:59:09.915762 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. May 10 09:59:09.927700 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. May 10 09:59:09.934471 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... May 10 09:59:09.944856 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (779) May 10 09:59:09.944885 kernel: BTRFS info (device vda6): first mount of filesystem b607f6a7-c99c-4217-b084-4c38060efb12 May 10 09:59:09.944898 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 10 09:59:09.944909 kernel: BTRFS info (device vda6): using free space tree May 10 09:59:09.958405 kernel: BTRFS info (device vda6): auto enabling async discard May 10 09:59:09.964030 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
May 10 09:59:10.049444 initrd-setup-root[807]: cut: /sysroot/etc/passwd: No such file or directory May 10 09:59:10.055888 initrd-setup-root[814]: cut: /sysroot/etc/group: No such file or directory May 10 09:59:10.062665 initrd-setup-root[821]: cut: /sysroot/etc/shadow: No such file or directory May 10 09:59:10.068034 initrd-setup-root[828]: cut: /sysroot/etc/gshadow: No such file or directory May 10 09:59:10.175430 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. May 10 09:59:10.179129 systemd[1]: Starting ignition-mount.service - Ignition (mount)... May 10 09:59:10.183141 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... May 10 09:59:10.195339 systemd[1]: sysroot-oem.mount: Deactivated successfully. May 10 09:59:10.200292 kernel: BTRFS info (device vda6): last unmount of filesystem b607f6a7-c99c-4217-b084-4c38060efb12 May 10 09:59:10.225451 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. May 10 09:59:10.237160 ignition[895]: INFO : Ignition 2.21.0 May 10 09:59:10.237160 ignition[895]: INFO : Stage: mount May 10 09:59:10.238344 ignition[895]: INFO : no configs at "/usr/lib/ignition/base.d" May 10 09:59:10.238344 ignition[895]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" May 10 09:59:10.239717 ignition[895]: INFO : mount: mount passed May 10 09:59:10.241313 ignition[895]: INFO : Ignition finished successfully May 10 09:59:10.242579 systemd[1]: Finished ignition-mount.service - Ignition (mount). May 10 09:59:10.743591 systemd-networkd[741]: eth0: Gained IPv6LL May 10 09:59:17.104587 coreos-metadata[781]: May 10 09:59:17.104 WARN failed to locate config-drive, using the metadata service API instead May 10 09:59:17.145857 coreos-metadata[781]: May 10 09:59:17.145 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 May 10 09:59:17.163071 coreos-metadata[781]: May 10 09:59:17.162 INFO Fetch successful May 10 09:59:17.163071 coreos-metadata[781]: May 10 09:59:17.162 INFO wrote hostname ci-4330-0-0-n-cc41d9e3f6.novalocal to /sysroot/etc/hostname May 10 09:59:17.166154 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. May 10 09:59:17.166397 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent. May 10 09:59:17.173937 systemd[1]: Starting ignition-files.service - Ignition (files)... May 10 09:59:17.201402 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 10 09:59:17.232359 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (913) May 10 09:59:17.242860 kernel: BTRFS info (device vda6): first mount of filesystem b607f6a7-c99c-4217-b084-4c38060efb12 May 10 09:59:17.242930 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 10 09:59:17.242960 kernel: BTRFS info (device vda6): using free space tree May 10 09:59:17.255356 kernel: BTRFS info (device vda6): auto enabling async discard May 10 09:59:17.260241 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
May 10 09:59:17.309071 ignition[931]: INFO : Ignition 2.21.0 May 10 09:59:17.309071 ignition[931]: INFO : Stage: files May 10 09:59:17.312099 ignition[931]: INFO : no configs at "/usr/lib/ignition/base.d" May 10 09:59:17.312099 ignition[931]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" May 10 09:59:17.312099 ignition[931]: DEBUG : files: compiled without relabeling support, skipping May 10 09:59:17.318218 ignition[931]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 10 09:59:17.318218 ignition[931]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 10 09:59:17.318218 ignition[931]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 10 09:59:17.318218 ignition[931]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 10 09:59:17.326074 ignition[931]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 10 09:59:17.326074 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" May 10 09:59:17.326074 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 May 10 09:59:17.318224 unknown[931]: wrote ssh authorized keys file for user: core May 10 09:59:17.399986 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK May 10 09:59:17.710351 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" May 10 09:59:17.710351 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" May 10 09:59:17.714751 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 May 10 09:59:18.476516 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK May 10 09:59:19.025582 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" May 10 09:59:19.027298 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" May 10 09:59:19.027298 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" May 10 09:59:19.027298 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" May 10 09:59:19.027298 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" May 10 09:59:19.027298 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 10 09:59:19.027298 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 10 09:59:19.027298 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 10 09:59:19.040980 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 10 09:59:19.040980 
ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" May 10 09:59:19.040980 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 10 09:59:19.040980 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" May 10 09:59:19.040980 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" May 10 09:59:19.040980 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" May 10 09:59:19.040980 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 May 10 09:59:19.442568 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK May 10 09:59:20.976611 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" May 10 09:59:20.976611 ignition[931]: INFO : files: op(c): [started] processing unit "prepare-helm.service" May 10 09:59:20.980465 ignition[931]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 10 09:59:20.980465 ignition[931]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 10 09:59:20.980465 ignition[931]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" May 10 09:59:20.980465 ignition[931]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" May 10 09:59:20.980465 ignition[931]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" May 10 09:59:20.980465 ignition[931]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" May 10 09:59:20.980465 ignition[931]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" May 10 09:59:20.980465 ignition[931]: INFO : files: files passed May 10 09:59:20.980465 ignition[931]: INFO : Ignition finished successfully May 10 09:59:20.981383 systemd[1]: Finished ignition-files.service - Ignition (files). May 10 09:59:20.988496 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... May 10 09:59:20.996162 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... May 10 09:59:21.007440 systemd[1]: ignition-quench.service: Deactivated successfully. May 10 09:59:21.008171 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
May 10 09:59:21.021316 initrd-setup-root-after-ignition[961]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 10 09:59:21.021316 initrd-setup-root-after-ignition[961]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory May 10 09:59:21.026085 initrd-setup-root-after-ignition[965]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 10 09:59:21.024691 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. May 10 09:59:21.027050 systemd[1]: Reached target ignition-complete.target - Ignition Complete. May 10 09:59:21.031354 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... May 10 09:59:21.082192 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 10 09:59:21.082431 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. May 10 09:59:21.084739 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. May 10 09:59:21.085664 systemd[1]: Reached target initrd.target - Initrd Default Target. May 10 09:59:21.087857 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. May 10 09:59:21.090385 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... May 10 09:59:21.115812 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 10 09:59:21.120861 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... May 10 09:59:21.152560 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. May 10 09:59:21.155777 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. May 10 09:59:21.158799 systemd[1]: Stopped target timers.target - Timer Units. May 10 09:59:21.160349 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 10 09:59:21.160636 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 10 09:59:21.163727 systemd[1]: Stopped target initrd.target - Initrd Default Target. May 10 09:59:21.165473 systemd[1]: Stopped target basic.target - Basic System. May 10 09:59:21.168246 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. May 10 09:59:21.170752 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. May 10 09:59:21.173194 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. May 10 09:59:21.176054 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. May 10 09:59:21.178946 systemd[1]: Stopped target remote-fs.target - Remote File Systems. May 10 09:59:21.181705 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. May 10 09:59:21.184755 systemd[1]: Stopped target sysinit.target - System Initialization. May 10 09:59:21.187532 systemd[1]: Stopped target local-fs.target - Local File Systems. May 10 09:59:21.190369 systemd[1]: Stopped target swap.target - Swaps. May 10 09:59:21.193023 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 10 09:59:21.193339 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. May 10 09:59:21.196394 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. May 10 09:59:21.198156 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 10 09:59:21.200640 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. 
May 10 09:59:21.201487 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 10 09:59:21.203603 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 10 09:59:21.203875 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. May 10 09:59:21.207790 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 10 09:59:21.208088 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 10 09:59:21.209850 systemd[1]: ignition-files.service: Deactivated successfully. May 10 09:59:21.210204 systemd[1]: Stopped ignition-files.service - Ignition (files). May 10 09:59:21.215684 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 10 09:59:21.222707 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 10 09:59:21.224011 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 10 09:59:21.225788 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. May 10 09:59:21.229308 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 10 09:59:21.229683 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 10 09:59:21.249509 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 10 09:59:21.249613 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. May 10 09:59:21.260288 ignition[985]: INFO : Ignition 2.21.0 May 10 09:59:21.260288 ignition[985]: INFO : Stage: umount May 10 09:59:21.260288 ignition[985]: INFO : no configs at "/usr/lib/ignition/base.d" May 10 09:59:21.260288 ignition[985]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" May 10 09:59:21.266164 ignition[985]: INFO : umount: umount passed May 10 09:59:21.266164 ignition[985]: INFO : Ignition finished successfully May 10 09:59:21.263799 systemd[1]: ignition-mount.service: Deactivated successfully. May 10 09:59:21.263934 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 10 09:59:21.264801 systemd[1]: ignition-disks.service: Deactivated successfully. May 10 09:59:21.264851 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 10 09:59:21.265457 systemd[1]: ignition-kargs.service: Deactivated successfully. May 10 09:59:21.265501 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 10 09:59:21.267647 systemd[1]: ignition-fetch.service: Deactivated successfully. May 10 09:59:21.267688 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). May 10 09:59:21.268827 systemd[1]: Stopped target network.target - Network. May 10 09:59:21.270652 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 10 09:59:21.270702 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 10 09:59:21.271327 systemd[1]: Stopped target paths.target - Path Units. May 10 09:59:21.271788 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 10 09:59:21.273662 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 10 09:59:21.274497 systemd[1]: Stopped target slices.target - Slice Units. May 10 09:59:21.275005 systemd[1]: Stopped target sockets.target - Socket Units. May 10 09:59:21.275613 systemd[1]: iscsid.socket: Deactivated successfully. May 10 09:59:21.275650 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 10 09:59:21.276154 systemd[1]: iscsiuio.socket: Deactivated successfully. 
May 10 09:59:21.276186 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 10 09:59:21.278087 systemd[1]: ignition-setup.service: Deactivated successfully. May 10 09:59:21.278155 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 10 09:59:21.279001 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 10 09:59:21.279044 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 10 09:59:21.280156 systemd[1]: Stopping systemd-networkd.service - Network Configuration... May 10 09:59:21.281385 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 10 09:59:21.284614 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 10 09:59:21.285235 systemd[1]: systemd-resolved.service: Deactivated successfully. May 10 09:59:21.285354 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. May 10 09:59:21.288246 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. May 10 09:59:21.288667 systemd[1]: sysroot-boot.service: Deactivated successfully. May 10 09:59:21.288757 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. May 10 09:59:21.290540 systemd[1]: systemd-networkd.service: Deactivated successfully. May 10 09:59:21.290635 systemd[1]: Stopped systemd-networkd.service - Network Configuration. May 10 09:59:21.292132 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. May 10 09:59:21.293597 systemd[1]: Stopped target network-pre.target - Preparation for Network. May 10 09:59:21.294655 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 10 09:59:21.294703 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 10 09:59:21.299618 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 10 09:59:21.299666 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. May 10 09:59:21.303335 systemd[1]: Stopping network-cleanup.service - Network Cleanup... May 10 09:59:21.303956 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 10 09:59:21.304011 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 10 09:59:21.305062 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 10 09:59:21.305103 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 10 09:59:21.306354 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 10 09:59:21.306396 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. May 10 09:59:21.307660 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. May 10 09:59:21.307705 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. May 10 09:59:21.310220 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... May 10 09:59:21.311748 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 10 09:59:21.311808 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. May 10 09:59:21.318537 systemd[1]: systemd-udevd.service: Deactivated successfully. May 10 09:59:21.318685 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. May 10 09:59:21.320346 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 10 09:59:21.320401 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. 
May 10 09:59:21.321904 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 10 09:59:21.321933 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. May 10 09:59:21.323637 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 10 09:59:21.323685 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. May 10 09:59:21.325387 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 10 09:59:21.325428 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. May 10 09:59:21.326420 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 10 09:59:21.326466 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 10 09:59:21.331362 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... May 10 09:59:21.331957 systemd[1]: systemd-network-generator.service: Deactivated successfully. May 10 09:59:21.332007 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. May 10 09:59:21.333421 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 10 09:59:21.333467 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 10 09:59:21.334528 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. May 10 09:59:21.334572 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 10 09:59:21.336302 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 10 09:59:21.336346 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. May 10 09:59:21.337111 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 10 09:59:21.337154 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 10 09:59:21.342087 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. May 10 09:59:21.342150 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully. May 10 09:59:21.342192 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. May 10 09:59:21.342236 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. May 10 09:59:21.342726 systemd[1]: network-cleanup.service: Deactivated successfully. May 10 09:59:21.344328 systemd[1]: Stopped network-cleanup.service - Network Cleanup. May 10 09:59:21.347203 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 10 09:59:21.347390 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. May 10 09:59:21.348627 systemd[1]: Reached target initrd-switch-root.target - Switch Root. May 10 09:59:21.352396 systemd[1]: Starting initrd-switch-root.service - Switch Root... May 10 09:59:21.371676 systemd[1]: Switching root. May 10 09:59:21.410287 systemd-journald[186]: Journal stopped May 10 09:59:23.580985 systemd-journald[186]: Received SIGTERM from PID 1 (systemd). 
May 10 09:59:23.581045 kernel: SELinux: policy capability network_peer_controls=1 May 10 09:59:23.581063 kernel: SELinux: policy capability open_perms=1 May 10 09:59:23.581078 kernel: SELinux: policy capability extended_socket_class=1 May 10 09:59:23.581090 kernel: SELinux: policy capability always_check_network=0 May 10 09:59:23.581104 kernel: SELinux: policy capability cgroup_seclabel=1 May 10 09:59:23.581118 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 10 09:59:23.581130 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 10 09:59:23.581142 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 10 09:59:23.581158 kernel: audit: type=1403 audit(1746871162.180:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 May 10 09:59:23.581173 systemd[1]: Successfully loaded SELinux policy in 75.743ms. May 10 09:59:23.581192 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 26.793ms. May 10 09:59:23.581206 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 10 09:59:23.581219 systemd[1]: Detected virtualization kvm. May 10 09:59:23.581232 systemd[1]: Detected architecture x86-64. May 10 09:59:23.581246 systemd[1]: Detected first boot. May 10 09:59:23.589365 systemd[1]: Hostname set to <ci-4330-0-0-n-cc41d9e3f6.novalocal>. May 10 09:59:23.589387 systemd[1]: Initializing machine ID from VM UUID. May 10 09:59:23.589402 zram_generator::config[1030]: No configuration found. May 10 09:59:23.589418 kernel: Guest personality initialized and is inactive May 10 09:59:23.589432 kernel: VMCI host device registered (name=vmci, major=10, minor=125) May 10 09:59:23.589445 kernel: Initialized host personality May 10 09:59:23.589458 kernel: NET: Registered PF_VSOCK protocol family May 10 09:59:23.589476 systemd[1]: Populated /etc with preset unit settings. May 10 09:59:23.589492 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. May 10 09:59:23.589507 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 10 09:59:23.589525 systemd[1]: Stopped initrd-switch-root.service - Switch Root. May 10 09:59:23.589539 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 10 09:59:23.589554 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. May 10 09:59:23.589568 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. May 10 09:59:23.589582 systemd[1]: Created slice system-getty.slice - Slice /system/getty. May 10 09:59:23.589596 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. May 10 09:59:23.589612 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. May 10 09:59:23.589627 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. May 10 09:59:23.589641 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. May 10 09:59:23.589655 systemd[1]: Created slice user.slice - User and Session Slice. May 10 09:59:23.589669 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 10 09:59:23.589683 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 10 09:59:23.589697 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. May 10 09:59:23.589712 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. May 10 09:59:23.589729 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. May 10 09:59:23.589743 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 10 09:59:23.589757 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... May 10 09:59:23.589771 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 10 09:59:23.589786 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 10 09:59:23.589799 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. May 10 09:59:23.589813 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. May 10 09:59:23.589829 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. May 10 09:59:23.589842 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. May 10 09:59:23.589857 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 10 09:59:23.589870 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 10 09:59:23.589884 systemd[1]: Reached target slices.target - Slice Units. May 10 09:59:23.589897 systemd[1]: Reached target swap.target - Swaps. May 10 09:59:23.589910 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. May 10 09:59:23.589924 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. May 10 09:59:23.589938 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. May 10 09:59:23.589954 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 10 09:59:23.589974 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 10 09:59:23.589993 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 10 09:59:23.590012 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. May 10 09:59:23.590031 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... May 10 09:59:23.590053 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... May 10 09:59:23.590069 systemd[1]: Mounting media.mount - External Media Directory... May 10 09:59:23.590081 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 10 09:59:23.590094 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... May 10 09:59:23.590110 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... May 10 09:59:23.590123 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... May 10 09:59:23.590136 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 10 09:59:23.590149 systemd[1]: Reached target machines.target - Containers. May 10 09:59:23.590161 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... May 10 09:59:23.590174 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
May 10 09:59:23.590187 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 10 09:59:23.590199 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... May 10 09:59:23.590214 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 10 09:59:23.590226 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 10 09:59:23.590238 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 10 09:59:23.592725 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... May 10 09:59:23.592750 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 10 09:59:23.592765 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 10 09:59:23.592778 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 10 09:59:23.592791 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. May 10 09:59:23.592805 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 10 09:59:23.592823 systemd[1]: Stopped systemd-fsck-usr.service. May 10 09:59:23.592837 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 10 09:59:23.592852 systemd[1]: Starting systemd-journald.service - Journal Service... May 10 09:59:23.592867 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 10 09:59:23.592881 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 10 09:59:23.592896 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... May 10 09:59:23.592911 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... May 10 09:59:23.592924 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 10 09:59:23.592938 systemd[1]: verity-setup.service: Deactivated successfully. May 10 09:59:23.592951 systemd[1]: Stopped verity-setup.service. May 10 09:59:23.592966 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 10 09:59:23.592980 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. May 10 09:59:23.592993 kernel: loop: module loaded May 10 09:59:23.593007 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. May 10 09:59:23.593021 systemd[1]: Mounted media.mount - External Media Directory. May 10 09:59:23.593034 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. May 10 09:59:23.593047 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. May 10 09:59:23.593065 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. May 10 09:59:23.593090 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 10 09:59:23.593109 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 10 09:59:23.593126 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. May 10 09:59:23.593147 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 10 09:59:23.593162 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
May 10 09:59:23.593176 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 10 09:59:23.593190 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 10 09:59:23.593204 systemd[1]: modprobe@loop.service: Deactivated successfully. May 10 09:59:23.593218 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 10 09:59:23.593237 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 10 09:59:23.593269 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 10 09:59:23.593284 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. May 10 09:59:23.593298 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. May 10 09:59:23.593312 systemd[1]: Reached target network-pre.target - Preparation for Network. May 10 09:59:23.593326 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... May 10 09:59:23.593340 kernel: fuse: init (API version 7.39) May 10 09:59:23.593353 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 10 09:59:23.593368 systemd[1]: Reached target local-fs.target - Local File Systems. May 10 09:59:23.593386 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. May 10 09:59:23.593439 systemd-journald[1113]: Collecting audit messages is disabled. May 10 09:59:23.593472 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... May 10 09:59:23.593487 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 10 09:59:23.593502 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... May 10 09:59:23.593517 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 10 09:59:23.593531 systemd-journald[1113]: Journal started May 10 09:59:23.593562 systemd-journald[1113]: Runtime Journal (/run/log/journal/47e66d72309b4c0e96292d5f48722efa) is 8M, max 78.5M, 70.5M free. May 10 09:59:23.179729 systemd[1]: Queued start job for default target multi-user.target. May 10 09:59:23.191463 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. May 10 09:59:23.191908 systemd[1]: systemd-journald.service: Deactivated successfully. May 10 09:59:23.603523 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... May 10 09:59:23.603567 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 10 09:59:23.606461 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 10 09:59:23.610416 kernel: ACPI: bus type drm_connector registered May 10 09:59:23.619285 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... May 10 09:59:23.631357 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 10 09:59:23.631419 systemd[1]: Started systemd-journald.service - Journal Service. May 10 09:59:23.637356 systemd[1]: modprobe@drm.service: Deactivated successfully. May 10 09:59:23.637537 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. 
May 10 09:59:23.638279 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 10 09:59:23.638430 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. May 10 09:59:23.639426 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. May 10 09:59:23.645308 kernel: loop0: detected capacity change from 0 to 146240 May 10 09:59:23.658650 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. May 10 09:59:23.667541 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... May 10 09:59:23.678739 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. May 10 09:59:23.679553 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. May 10 09:59:23.681662 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... May 10 09:59:23.692595 systemd-journald[1113]: Time spent on flushing to /var/log/journal/47e66d72309b4c0e96292d5f48722efa is 30.510ms for 975 entries. May 10 09:59:23.692595 systemd-journald[1113]: System Journal (/var/log/journal/47e66d72309b4c0e96292d5f48722efa) is 8M, max 584.8M, 576.8M free. May 10 09:59:23.747654 systemd-journald[1113]: Received client request to flush runtime journal. May 10 09:59:23.702662 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 10 09:59:23.719910 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 10 09:59:23.731618 systemd-tmpfiles[1149]: ACLs are not supported, ignoring. May 10 09:59:23.731632 systemd-tmpfiles[1149]: ACLs are not supported, ignoring. May 10 09:59:23.740710 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 10 09:59:23.744368 systemd[1]: Starting systemd-sysusers.service - Create System Users... May 10 09:59:23.751694 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. May 10 09:59:23.765323 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 10 09:59:23.777036 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. May 10 09:59:23.790286 kernel: loop1: detected capacity change from 0 to 210664 May 10 09:59:23.838144 systemd[1]: Finished systemd-sysusers.service - Create System Users. May 10 09:59:23.840363 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 10 09:59:23.864557 systemd-tmpfiles[1187]: ACLs are not supported, ignoring. May 10 09:59:23.864580 systemd-tmpfiles[1187]: ACLs are not supported, ignoring. May 10 09:59:23.868688 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 10 09:59:24.108342 kernel: loop2: detected capacity change from 0 to 113872 May 10 09:59:24.194737 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 10 09:59:24.198432 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... May 10 09:59:24.241720 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. 
May 10 09:59:24.360058 kernel: loop3: detected capacity change from 0 to 8 May 10 09:59:24.382312 kernel: loop4: detected capacity change from 0 to 146240 May 10 09:59:24.463284 kernel: loop5: detected capacity change from 0 to 210664 May 10 09:59:24.519305 kernel: loop6: detected capacity change from 0 to 113872 May 10 09:59:24.551014 (sd-merge)[1195]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'. May 10 09:59:24.551435 kernel: loop7: detected capacity change from 0 to 8 May 10 09:59:24.551550 (sd-merge)[1195]: Merged extensions into '/usr'. May 10 09:59:24.556306 systemd[1]: Reload requested from client PID 1148 ('systemd-sysext') (unit systemd-sysext.service)... May 10 09:59:24.556322 systemd[1]: Reloading... May 10 09:59:24.649286 zram_generator::config[1222]: No configuration found. May 10 09:59:24.839808 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 10 09:59:24.943790 systemd[1]: Reloading finished in 387 ms. May 10 09:59:24.960430 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. May 10 09:59:24.969406 systemd[1]: Starting ensure-sysext.service... May 10 09:59:24.978026 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 10 09:59:25.013420 systemd[1]: Reload requested from client PID 1278 ('systemctl') (unit ensure-sysext.service)... May 10 09:59:25.013440 systemd[1]: Reloading... May 10 09:59:25.032072 systemd-tmpfiles[1279]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. May 10 09:59:25.032111 systemd-tmpfiles[1279]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. May 10 09:59:25.032403 systemd-tmpfiles[1279]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 10 09:59:25.032651 systemd-tmpfiles[1279]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. May 10 09:59:25.033462 systemd-tmpfiles[1279]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 10 09:59:25.033756 systemd-tmpfiles[1279]: ACLs are not supported, ignoring. May 10 09:59:25.033820 systemd-tmpfiles[1279]: ACLs are not supported, ignoring. May 10 09:59:25.038552 systemd-tmpfiles[1279]: Detected autofs mount point /boot during canonicalization of boot. May 10 09:59:25.038563 systemd-tmpfiles[1279]: Skipping /boot May 10 09:59:25.050200 systemd-tmpfiles[1279]: Detected autofs mount point /boot during canonicalization of boot. May 10 09:59:25.050214 systemd-tmpfiles[1279]: Skipping /boot May 10 09:59:25.121326 zram_generator::config[1310]: No configuration found. May 10 09:59:25.154285 ldconfig[1137]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 10 09:59:25.243737 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 10 09:59:25.344038 systemd[1]: Reloading finished in 330 ms. May 10 09:59:25.355784 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. May 10 09:59:25.357033 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. 
May 10 09:59:25.363334 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 10 09:59:25.379377 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 10 09:59:25.382548 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... May 10 09:59:25.385550 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... May 10 09:59:25.393661 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 10 09:59:25.396471 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 10 09:59:25.405458 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... May 10 09:59:25.415020 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 10 09:59:25.415865 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 10 09:59:25.418737 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 10 09:59:25.424914 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 10 09:59:25.435827 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 10 09:59:25.438423 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 10 09:59:25.438578 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 10 09:59:25.438708 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 10 09:59:25.443771 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 10 09:59:25.445079 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 10 09:59:25.445296 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 10 09:59:25.445410 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 10 09:59:25.451810 systemd[1]: Starting systemd-userdbd.service - User Database Manager... May 10 09:59:25.453313 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 10 09:59:25.465606 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 10 09:59:25.468113 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 10 09:59:25.476925 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 10 09:59:25.478295 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
May 10 09:59:25.478428 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 10 09:59:25.478621 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 10 09:59:25.483297 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. May 10 09:59:25.484597 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 10 09:59:25.484771 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 10 09:59:25.485756 systemd[1]: modprobe@loop.service: Deactivated successfully. May 10 09:59:25.485910 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 10 09:59:25.492106 systemd[1]: Finished ensure-sysext.service. May 10 09:59:25.493193 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 10 09:59:25.494474 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 10 09:59:25.504464 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 10 09:59:25.504536 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 10 09:59:25.508493 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... May 10 09:59:25.513713 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. May 10 09:59:25.517457 systemd[1]: Starting systemd-update-done.service - Update is Completed... May 10 09:59:25.519649 systemd[1]: modprobe@drm.service: Deactivated successfully. May 10 09:59:25.519880 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 10 09:59:25.543891 augenrules[1405]: No rules May 10 09:59:25.545553 systemd[1]: audit-rules.service: Deactivated successfully. May 10 09:59:25.545820 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 10 09:59:25.546524 systemd-udevd[1370]: Using default interface naming scheme 'v255'. May 10 09:59:25.555703 systemd[1]: Finished systemd-update-done.service - Update is Completed. May 10 09:59:25.564536 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. May 10 09:59:25.565420 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 10 09:59:25.577637 systemd[1]: Started systemd-userdbd.service - User Database Manager. May 10 09:59:25.608508 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 10 09:59:25.612531 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 10 09:59:25.736829 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. May 10 09:59:25.737599 systemd[1]: Reached target time-set.target - System Time Set. May 10 09:59:25.753838 systemd-resolved[1369]: Positive Trust Anchors: May 10 09:59:25.753855 systemd-resolved[1369]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 10 09:59:25.753900 systemd-resolved[1369]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 10 09:59:25.765319 systemd-resolved[1369]: Using system hostname 'ci-4330-0-0-n-cc41d9e3f6.novalocal'. May 10 09:59:25.768944 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 10 09:59:25.770313 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 10 09:59:25.771166 systemd[1]: Reached target sysinit.target - System Initialization. May 10 09:59:25.773678 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 10 09:59:25.774297 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 10 09:59:25.774840 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. May 10 09:59:25.775600 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 10 09:59:25.776196 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 10 09:59:25.776932 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 10 09:59:25.779371 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 10 09:59:25.779428 systemd[1]: Reached target paths.target - Path Units. May 10 09:59:25.780213 systemd[1]: Reached target timers.target - Timer Units. May 10 09:59:25.782350 systemd-networkd[1419]: lo: Link UP May 10 09:59:25.782702 systemd-networkd[1419]: lo: Gained carrier May 10 09:59:25.782855 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 10 09:59:25.783584 systemd-networkd[1419]: Enumeration completed May 10 09:59:25.785504 systemd[1]: Starting docker.socket - Docker Socket for the API... May 10 09:59:25.791743 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). May 10 09:59:25.794702 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). May 10 09:59:25.795437 systemd[1]: Reached target ssh-access.target - SSH Access Available. May 10 09:59:25.805912 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 10 09:59:25.807797 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. May 10 09:59:25.809075 systemd[1]: Started systemd-networkd.service - Network Configuration. May 10 09:59:25.810330 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 10 09:59:25.812071 systemd[1]: Reached target network.target - Network. May 10 09:59:25.813341 systemd[1]: Reached target sockets.target - Socket Units. May 10 09:59:25.814322 systemd[1]: Reached target basic.target - Basic System. May 10 09:59:25.814845 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. 
May 10 09:59:25.814877 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 10 09:59:25.817476 systemd[1]: Starting containerd.service - containerd container runtime... May 10 09:59:25.820747 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... May 10 09:59:25.830462 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 10 09:59:25.832000 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... May 10 09:59:25.838510 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 10 09:59:25.844102 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 10 09:59:25.845323 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 10 09:59:25.852165 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... May 10 09:59:25.858456 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 10 09:59:25.858985 jq[1458]: false May 10 09:59:25.862639 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 10 09:59:25.871358 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 10 09:59:25.875888 extend-filesystems[1459]: Found loop4 May 10 09:59:25.875888 extend-filesystems[1459]: Found loop5 May 10 09:59:25.875888 extend-filesystems[1459]: Found loop6 May 10 09:59:25.875888 extend-filesystems[1459]: Found loop7 May 10 09:59:25.875888 extend-filesystems[1459]: Found vda May 10 09:59:25.875888 extend-filesystems[1459]: Found vda1 May 10 09:59:25.875888 extend-filesystems[1459]: Found vda2 May 10 09:59:25.875888 extend-filesystems[1459]: Found vda3 May 10 09:59:25.875888 extend-filesystems[1459]: Found usr May 10 09:59:25.875888 extend-filesystems[1459]: Found vda4 May 10 09:59:25.875888 extend-filesystems[1459]: Found vda6 May 10 09:59:25.875888 extend-filesystems[1459]: Found vda7 May 10 09:59:25.875888 extend-filesystems[1459]: Found vda9 May 10 09:59:25.876466 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 10 09:59:25.884397 systemd[1]: Starting systemd-logind.service - User Login Management... May 10 09:59:25.893497 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... May 10 09:59:25.902516 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... May 10 09:59:25.904992 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 10 09:59:25.905568 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 10 09:59:25.910855 google_oslogin_nss_cache[1462]: oslogin_cache_refresh[1462]: Refreshing passwd entry cache May 10 09:59:25.909862 systemd[1]: Starting update-engine.service - Update Engine... May 10 09:59:25.909685 oslogin_cache_refresh[1462]: Refreshing passwd entry cache May 10 09:59:25.918170 oslogin_cache_refresh[1462]: Failure getting users, quitting May 10 09:59:25.918457 google_oslogin_nss_cache[1462]: oslogin_cache_refresh[1462]: Failure getting users, quitting May 10 09:59:25.918457 google_oslogin_nss_cache[1462]: oslogin_cache_refresh[1462]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. 
May 10 09:59:25.918457 google_oslogin_nss_cache[1462]: oslogin_cache_refresh[1462]: Refreshing group entry cache May 10 09:59:25.914043 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 10 09:59:25.918185 oslogin_cache_refresh[1462]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. May 10 09:59:25.918224 oslogin_cache_refresh[1462]: Refreshing group entry cache May 10 09:59:25.919283 google_oslogin_nss_cache[1462]: oslogin_cache_refresh[1462]: Failure getting groups, quitting May 10 09:59:25.919283 google_oslogin_nss_cache[1462]: oslogin_cache_refresh[1462]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. May 10 09:59:25.918780 oslogin_cache_refresh[1462]: Failure getting groups, quitting May 10 09:59:25.918788 oslogin_cache_refresh[1462]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. May 10 09:59:25.924120 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. May 10 09:59:25.929487 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 10 09:59:25.929708 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 10 09:59:25.930094 systemd[1]: extend-filesystems.service: Deactivated successfully. May 10 09:59:25.931307 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 10 09:59:25.932246 systemd[1]: google-oslogin-cache.service: Deactivated successfully. May 10 09:59:25.932463 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. May 10 09:59:25.935362 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 10 09:59:25.937316 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. May 10 09:59:25.944978 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. May 10 09:59:25.968155 jq[1472]: true May 10 09:59:25.968239 systemd[1]: motdgen.service: Deactivated successfully. May 10 09:59:25.968844 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 10 09:59:25.971925 update_engine[1471]: I20250510 09:59:25.971855 1471 main.cc:92] Flatcar Update Engine starting May 10 09:59:25.989593 (ntainerd)[1491]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 10 09:59:25.992784 tar[1479]: linux-amd64/helm May 10 09:59:26.011503 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. May 10 09:59:26.030503 jq[1492]: true May 10 09:59:26.059208 dbus-daemon[1456]: [system] SELinux support is enabled May 10 09:59:26.065927 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 10 09:59:26.069478 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 10 09:59:26.069504 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 10 09:59:26.070053 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 10 09:59:26.070070 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
May 10 09:59:26.084203 systemd[1]: Started update-engine.service - Update Engine. May 10 09:59:26.086511 update_engine[1471]: I20250510 09:59:26.086174 1471 update_check_scheduler.cc:74] Next update check in 7m6s May 10 09:59:26.099243 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 10 09:59:26.103284 kernel: mousedev: PS/2 mouse device common for all mice May 10 09:59:26.146290 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1424) May 10 09:59:26.160006 bash[1517]: Updated "/home/core/.ssh/authorized_keys" May 10 09:59:26.161284 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 10 09:59:26.173073 systemd[1]: Starting sshkeys.service... May 10 09:59:26.184280 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 May 10 09:59:26.192284 kernel: ACPI: button: Power Button [PWRF] May 10 09:59:26.218299 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 May 10 09:59:26.237807 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. May 10 09:59:26.242604 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... May 10 09:59:26.266404 systemd-networkd[1419]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 10 09:59:26.266415 systemd-networkd[1419]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 10 09:59:26.276838 systemd-networkd[1419]: eth0: Link UP May 10 09:59:26.276847 systemd-networkd[1419]: eth0: Gained carrier May 10 09:59:26.276877 systemd-networkd[1419]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 10 09:59:26.298347 systemd-networkd[1419]: eth0: DHCPv4 address 172.24.4.22/24, gateway 172.24.4.1 acquired from 172.24.4.1 May 10 09:59:26.301939 systemd-timesyncd[1397]: Network configuration changed, trying to establish connection. May 10 09:59:26.319417 systemd-logind[1468]: New seat seat0. May 10 09:59:26.324169 systemd[1]: Started systemd-logind.service - User Login Management. May 10 09:59:26.439360 locksmithd[1507]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 10 09:59:26.450588 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 10 09:59:26.506619 systemd-logind[1468]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) May 10 09:59:26.534281 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0 May 10 09:59:26.537703 systemd-logind[1468]: Watching system buttons on /dev/input/event2 (Power Button) May 10 09:59:26.544309 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console May 10 09:59:26.552646 kernel: Console: switching to colour dummy device 80x25 May 10 09:59:26.554622 kernel: [drm] features: -virgl +edid -resource_blob -host_visible May 10 09:59:26.554651 kernel: [drm] features: -context_init May 10 09:59:26.556599 kernel: [drm] number of scanouts: 1 May 10 09:59:26.556628 kernel: [drm] number of cap sets: 0 May 10 09:59:26.574470 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 10 09:59:26.590867 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... 
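update_engine's "Next update check in 7m6s" uses a compact duration format; a small helper for turning such strings into seconds (a sketch that only handles the h/m/s units appearing in these logs):

```python
import re

# Parse update_engine-style durations such as "7m6s" into seconds (sketch only).
def parse_duration(text):
    total = 0
    for value, unit in re.findall(r"(\d+)([hms])", text):
        total += int(value) * {"h": 3600, "m": 60, "s": 1}[unit]
    return total

print(parse_duration("7m6s"))   # 426
```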
May 10 09:59:26.604951 containerd[1491]: time="2025-05-10T09:59:26Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 May 10 09:59:26.612579 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0 May 10 09:59:26.612633 containerd[1491]: time="2025-05-10T09:59:26.612143058Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 May 10 09:59:26.628075 containerd[1491]: time="2025-05-10T09:59:26.628020957Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="11.592µs" May 10 09:59:26.628146 containerd[1491]: time="2025-05-10T09:59:26.628071512Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 May 10 09:59:26.628146 containerd[1491]: time="2025-05-10T09:59:26.628102811Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 May 10 09:59:26.628554 containerd[1491]: time="2025-05-10T09:59:26.628310300Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 May 10 09:59:26.628554 containerd[1491]: time="2025-05-10T09:59:26.628345576Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 May 10 09:59:26.628554 containerd[1491]: time="2025-05-10T09:59:26.628388356Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 10 09:59:26.628554 containerd[1491]: time="2025-05-10T09:59:26.628468026Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 10 09:59:26.628554 containerd[1491]: time="2025-05-10T09:59:26.628489506Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 10 09:59:26.628772 containerd[1491]: time="2025-05-10T09:59:26.628735417Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 10 09:59:26.628772 containerd[1491]: time="2025-05-10T09:59:26.628764011Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 10 09:59:26.628826 containerd[1491]: time="2025-05-10T09:59:26.628788327Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 10 09:59:26.628826 containerd[1491]: time="2025-05-10T09:59:26.628805238Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 May 10 09:59:26.629488 containerd[1491]: time="2025-05-10T09:59:26.628902521Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 May 10 09:59:26.629488 containerd[1491]: time="2025-05-10T09:59:26.629119568Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 10 09:59:26.629488 containerd[1491]: time="2025-05-10T09:59:26.629157409Z" level=info msg="skip loading plugin" error="lstat 
/var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 10 09:59:26.629488 containerd[1491]: time="2025-05-10T09:59:26.629171465Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 May 10 09:59:26.629488 containerd[1491]: time="2025-05-10T09:59:26.629199257Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 May 10 09:59:26.630036 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 10 09:59:26.630337 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 10 09:59:26.645399 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device May 10 09:59:26.645470 kernel: Console: switching to colour frame buffer device 160x50 May 10 09:59:26.639938 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. May 10 09:59:26.645553 containerd[1491]: time="2025-05-10T09:59:26.637469789Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 May 10 09:59:26.645553 containerd[1491]: time="2025-05-10T09:59:26.637670796Z" level=info msg="metadata content store policy set" policy=shared May 10 09:59:26.647339 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 10 09:59:26.653377 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device May 10 09:59:26.656005 containerd[1491]: time="2025-05-10T09:59:26.655725788Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 May 10 09:59:26.656005 containerd[1491]: time="2025-05-10T09:59:26.655804445Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 May 10 09:59:26.656005 containerd[1491]: time="2025-05-10T09:59:26.655824663Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 May 10 09:59:26.656005 containerd[1491]: time="2025-05-10T09:59:26.655842086Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 May 10 09:59:26.656005 containerd[1491]: time="2025-05-10T09:59:26.655888513Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 May 10 09:59:26.656005 containerd[1491]: time="2025-05-10T09:59:26.655902389Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 May 10 09:59:26.656005 containerd[1491]: time="2025-05-10T09:59:26.655925903Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 May 10 09:59:26.656005 containerd[1491]: time="2025-05-10T09:59:26.655940771Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 May 10 09:59:26.656005 containerd[1491]: time="2025-05-10T09:59:26.655954687Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 May 10 09:59:26.656005 containerd[1491]: time="2025-05-10T09:59:26.655967411Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 May 10 09:59:26.656005 containerd[1491]: time="2025-05-10T09:59:26.655978852Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 May 10 09:59:26.656005 
containerd[1491]: time="2025-05-10T09:59:26.655993400Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 May 10 09:59:26.656314 containerd[1491]: time="2025-05-10T09:59:26.656110189Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 May 10 09:59:26.656314 containerd[1491]: time="2025-05-10T09:59:26.656134424Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 May 10 09:59:26.656314 containerd[1491]: time="2025-05-10T09:59:26.656151937Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 May 10 09:59:26.656314 containerd[1491]: time="2025-05-10T09:59:26.656164491Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 May 10 09:59:26.656314 containerd[1491]: time="2025-05-10T09:59:26.656176183Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 May 10 09:59:26.656314 containerd[1491]: time="2025-05-10T09:59:26.656187113Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 May 10 09:59:26.656314 containerd[1491]: time="2025-05-10T09:59:26.656199867Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 May 10 09:59:26.656314 containerd[1491]: time="2025-05-10T09:59:26.656213072Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 May 10 09:59:26.656314 containerd[1491]: time="2025-05-10T09:59:26.656225635Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 May 10 09:59:26.656314 containerd[1491]: time="2025-05-10T09:59:26.656238179Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 May 10 09:59:26.656314 containerd[1491]: time="2025-05-10T09:59:26.656277773Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 May 10 09:59:26.656590 containerd[1491]: time="2025-05-10T09:59:26.656354547Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" May 10 09:59:26.656590 containerd[1491]: time="2025-05-10T09:59:26.656371248Z" level=info msg="Start snapshots syncer" May 10 09:59:26.656590 containerd[1491]: time="2025-05-10T09:59:26.656396035Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 May 10 09:59:26.658007 containerd[1491]: time="2025-05-10T09:59:26.656632558Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" May 10 09:59:26.658007 containerd[1491]: time="2025-05-10T09:59:26.656702099Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 May 10 09:59:26.658150 containerd[1491]: time="2025-05-10T09:59:26.657465972Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 May 10 09:59:26.658150 containerd[1491]: time="2025-05-10T09:59:26.657601446Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 May 10 09:59:26.658150 containerd[1491]: time="2025-05-10T09:59:26.657629909Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 May 10 09:59:26.658150 containerd[1491]: time="2025-05-10T09:59:26.657642703Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 May 10 09:59:26.658150 containerd[1491]: time="2025-05-10T09:59:26.657655056Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 May 10 09:59:26.658150 containerd[1491]: time="2025-05-10T09:59:26.657668421Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 May 10 09:59:26.658150 containerd[1491]: time="2025-05-10T09:59:26.657680544Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 May 10 09:59:26.658150 containerd[1491]: time="2025-05-10T09:59:26.657692567Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 May 10 09:59:26.658150 containerd[1491]: time="2025-05-10T09:59:26.657715810Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 May 10 09:59:26.658150 containerd[1491]: 
time="2025-05-10T09:59:26.657733113Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 May 10 09:59:26.658150 containerd[1491]: time="2025-05-10T09:59:26.657745967Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 May 10 09:59:26.658395 containerd[1491]: time="2025-05-10T09:59:26.658298423Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 10 09:59:26.658395 containerd[1491]: time="2025-05-10T09:59:26.658323540Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 10 09:59:26.658395 containerd[1491]: time="2025-05-10T09:59:26.658334320Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 10 09:59:26.658395 containerd[1491]: time="2025-05-10T09:59:26.658385807Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 10 09:59:26.658485 containerd[1491]: time="2025-05-10T09:59:26.658399603Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 May 10 09:59:26.658485 containerd[1491]: time="2025-05-10T09:59:26.658416935Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 May 10 09:59:26.658485 containerd[1491]: time="2025-05-10T09:59:26.658431001Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 May 10 09:59:26.658485 containerd[1491]: time="2025-05-10T09:59:26.658448234Z" level=info msg="runtime interface created" May 10 09:59:26.658485 containerd[1491]: time="2025-05-10T09:59:26.658454055Z" level=info msg="created NRI interface" May 10 09:59:26.658485 containerd[1491]: time="2025-05-10T09:59:26.658462631Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 May 10 09:59:26.658485 containerd[1491]: time="2025-05-10T09:59:26.658474653Z" level=info msg="Connect containerd service" May 10 09:59:26.658624 containerd[1491]: time="2025-05-10T09:59:26.658505561Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 10 09:59:26.666277 containerd[1491]: time="2025-05-10T09:59:26.659561642Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 10 09:59:26.694153 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. May 10 09:59:26.709758 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 10 09:59:26.736031 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 10 09:59:26.736531 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 10 09:59:26.739733 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. May 10 09:59:26.744590 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 10 09:59:26.880365 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
May 10 09:59:26.934542 containerd[1491]: time="2025-05-10T09:59:26.934198914Z" level=info msg="Start subscribing containerd event" May 10 09:59:26.934542 containerd[1491]: time="2025-05-10T09:59:26.934432302Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 10 09:59:26.934542 containerd[1491]: time="2025-05-10T09:59:26.934521750Z" level=info msg=serving... address=/run/containerd/containerd.sock May 10 09:59:26.934719 containerd[1491]: time="2025-05-10T09:59:26.934453482Z" level=info msg="Start recovering state" May 10 09:59:26.934719 containerd[1491]: time="2025-05-10T09:59:26.934675177Z" level=info msg="Start event monitor" May 10 09:59:26.934719 containerd[1491]: time="2025-05-10T09:59:26.934694564Z" level=info msg="Start cni network conf syncer for default" May 10 09:59:26.934719 containerd[1491]: time="2025-05-10T09:59:26.934704633Z" level=info msg="Start streaming server" May 10 09:59:26.934719 containerd[1491]: time="2025-05-10T09:59:26.934714551Z" level=info msg="Registered namespace \"k8s.io\" with NRI" May 10 09:59:26.934845 containerd[1491]: time="2025-05-10T09:59:26.934723698Z" level=info msg="runtime interface starting up..." May 10 09:59:26.934845 containerd[1491]: time="2025-05-10T09:59:26.934731062Z" level=info msg="starting plugins..." May 10 09:59:26.934845 containerd[1491]: time="2025-05-10T09:59:26.934746181Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" May 10 09:59:26.934913 containerd[1491]: time="2025-05-10T09:59:26.934856217Z" level=info msg="containerd successfully booted in 0.330246s" May 10 09:59:26.935795 systemd[1]: Started containerd.service - containerd container runtime. May 10 09:59:27.010936 sshd_keygen[1493]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 10 09:59:27.035190 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 10 09:59:27.040285 systemd[1]: Starting issuegen.service - Generate /run/issue... May 10 09:59:27.041596 systemd[1]: Started sshd@0-172.24.4.22:22-172.24.4.1:37456.service - OpenSSH per-connection server daemon (172.24.4.1:37456). May 10 09:59:27.071445 systemd[1]: issuegen.service: Deactivated successfully. May 10 09:59:27.072315 systemd[1]: Finished issuegen.service - Generate /run/issue. May 10 09:59:27.076914 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 10 09:59:27.109097 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 10 09:59:27.114721 systemd[1]: Started getty@tty1.service - Getty on tty1. May 10 09:59:27.118629 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. May 10 09:59:27.119836 systemd[1]: Reached target getty.target - Login Prompts. May 10 09:59:27.155478 tar[1479]: linux-amd64/LICENSE May 10 09:59:27.155754 tar[1479]: linux-amd64/README.md May 10 09:59:27.180468 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 10 09:59:27.511709 systemd-networkd[1419]: eth0: Gained IPv6LL May 10 09:59:27.512806 systemd-timesyncd[1397]: Network configuration changed, trying to establish connection. May 10 09:59:27.517105 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 10 09:59:27.520147 systemd[1]: Reached target network-online.target - Network is Online. May 10 09:59:27.526955 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 10 09:59:27.531853 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... 
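prepare-helm.service above unpacks a release tarball and places the linux-amd64/helm binary under /opt/bin (the tar lines list linux-amd64/helm, LICENSE and README.md). A rough equivalent of that unpack step using Python's tarfile; the local file name helm.tar.gz is an assumption, while the member path and destination come from the log:

```python
import os
import stat
import tarfile

# Rough equivalent of the prepare-helm unpack step seen above.
def install_helm(tarball="helm.tar.gz", dest_dir="/opt/bin"):
    os.makedirs(dest_dir, exist_ok=True)
    with tarfile.open(tarball, "r:gz") as tar:
        member = tar.getmember("linux-amd64/helm")
        member.name = "helm"                      # strip the leading directory
        tar.extract(member, path=dest_dir)
    target = os.path.join(dest_dir, "helm")
    # Make sure the binary is executable (release tarballs usually already are).
    os.chmod(target, os.stat(target).st_mode | stat.S_IXUSR | stat.S_IXGRP | stat.S_IXOTH)
    return target
```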
May 10 09:59:27.597992 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 10 09:59:27.984532 sshd[1592]: Accepted publickey for core from 172.24.4.1 port 37456 ssh2: RSA SHA256:s763iqE5ZQO2n9I9yHPInO5+M518XrNVWKB/LWGB6zk May 10 09:59:27.989997 sshd-session[1592]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 09:59:28.023623 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 10 09:59:28.034639 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 10 09:59:28.045027 systemd-logind[1468]: New session 1 of user core. May 10 09:59:28.063991 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 10 09:59:28.073371 systemd[1]: Starting user@500.service - User Manager for UID 500... May 10 09:59:28.090381 (systemd)[1620]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 10 09:59:28.095678 systemd-logind[1468]: New session c1 of user core. May 10 09:59:28.255843 systemd[1620]: Queued start job for default target default.target. May 10 09:59:28.264153 systemd[1620]: Created slice app.slice - User Application Slice. May 10 09:59:28.264180 systemd[1620]: Reached target paths.target - Paths. May 10 09:59:28.264218 systemd[1620]: Reached target timers.target - Timers. May 10 09:59:28.265584 systemd[1620]: Starting dbus.socket - D-Bus User Message Bus Socket... May 10 09:59:28.286378 systemd[1620]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 10 09:59:28.286499 systemd[1620]: Reached target sockets.target - Sockets. May 10 09:59:28.286543 systemd[1620]: Reached target basic.target - Basic System. May 10 09:59:28.286579 systemd[1620]: Reached target default.target - Main User Target. May 10 09:59:28.286604 systemd[1620]: Startup finished in 181ms. May 10 09:59:28.286750 systemd[1]: Started user@500.service - User Manager for UID 500. May 10 09:59:28.297691 systemd[1]: Started session-1.scope - Session 1 of User core. May 10 09:59:28.793808 systemd[1]: Started sshd@1-172.24.4.22:22-172.24.4.1:43162.service - OpenSSH per-connection server daemon (172.24.4.1:43162). May 10 09:59:29.094066 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 10 09:59:29.110382 (kubelet)[1638]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 10 09:59:29.943185 sshd[1631]: Accepted publickey for core from 172.24.4.1 port 43162 ssh2: RSA SHA256:s763iqE5ZQO2n9I9yHPInO5+M518XrNVWKB/LWGB6zk May 10 09:59:29.944145 sshd-session[1631]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 09:59:29.957601 systemd-logind[1468]: New session 2 of user core. May 10 09:59:29.966825 systemd[1]: Started session-2.scope - Session 2 of User core. May 10 09:59:30.585832 sshd[1646]: Connection closed by 172.24.4.1 port 43162 May 10 09:59:30.587049 sshd-session[1631]: pam_unix(sshd:session): session closed for user core May 10 09:59:30.607789 systemd[1]: sshd@1-172.24.4.22:22-172.24.4.1:43162.service: Deactivated successfully. May 10 09:59:30.612447 systemd[1]: session-2.scope: Deactivated successfully. May 10 09:59:30.615583 systemd-logind[1468]: Session 2 logged out. Waiting for processes to exit. May 10 09:59:30.622993 systemd[1]: Started sshd@2-172.24.4.22:22-172.24.4.1:43166.service - OpenSSH per-connection server daemon (172.24.4.1:43166). 
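sshd logs each accepted login with the SHA-256 fingerprint of the public key it matched (the "SHA256:s763..." value above). The same fingerprint can be recomputed from an authorized_keys entry to confirm which key was used; a standard-library sketch, pointed at the file updated earlier in this log:

```python
import base64
import hashlib

# Recompute the "SHA256:..." fingerprint sshd logs for each accepted public key,
# so it can be matched against entries in /home/core/.ssh/authorized_keys.
def fingerprint(pubkey_line):
    b64_blob = pubkey_line.split()[1]          # "ssh-rsa AAAA... [comment]"
    digest = hashlib.sha256(base64.b64decode(b64_blob)).digest()
    return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")

with open("/home/core/.ssh/authorized_keys") as f:
    for line in f:
        line = line.strip()
        if line and not line.startswith("#"):
            print(fingerprint(line))
```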
May 10 09:59:30.630984 systemd-logind[1468]: Removed session 2. May 10 09:59:30.635880 kubelet[1638]: E0510 09:59:30.635848 1638 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 10 09:59:30.639507 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 10 09:59:30.639812 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 10 09:59:30.640840 systemd[1]: kubelet.service: Consumed 2.019s CPU time, 248.4M memory peak. May 10 09:59:31.787513 sshd[1652]: Accepted publickey for core from 172.24.4.1 port 43166 ssh2: RSA SHA256:s763iqE5ZQO2n9I9yHPInO5+M518XrNVWKB/LWGB6zk May 10 09:59:31.800185 sshd-session[1652]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 09:59:31.811332 systemd-logind[1468]: New session 3 of user core. May 10 09:59:31.824814 systemd[1]: Started session-3.scope - Session 3 of User core. May 10 09:59:32.191091 login[1600]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) May 10 09:59:32.195616 login[1599]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) May 10 09:59:32.207356 systemd-logind[1468]: New session 4 of user core. May 10 09:59:32.212730 systemd[1]: Started session-4.scope - Session 4 of User core. May 10 09:59:32.222220 systemd-logind[1468]: New session 5 of user core. May 10 09:59:32.238203 systemd[1]: Started session-5.scope - Session 5 of User core. May 10 09:59:32.490730 sshd[1656]: Connection closed by 172.24.4.1 port 43166 May 10 09:59:32.492100 sshd-session[1652]: pam_unix(sshd:session): session closed for user core May 10 09:59:32.500036 systemd-logind[1468]: Session 3 logged out. Waiting for processes to exit. May 10 09:59:32.500184 systemd[1]: sshd@2-172.24.4.22:22-172.24.4.1:43166.service: Deactivated successfully. May 10 09:59:32.504745 systemd[1]: session-3.scope: Deactivated successfully. May 10 09:59:32.509246 systemd-logind[1468]: Removed session 3. 
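The kubelet failure above, and the identical ones later in this log, share a single cause: /var/lib/kubelet/config.yaml does not exist yet, which is normal before something (typically kubeadm init/join) has written it. A minimal preflight check along those lines (sketch only; the path is taken from the error message):

```python
import os
import sys

# Minimal preflight check mirroring the kubelet error above: the service will
# keep crash-looping until this file is created.
KUBELET_CONFIG = "/var/lib/kubelet/config.yaml"

if not os.path.isfile(KUBELET_CONFIG):
    sys.exit(f"{KUBELET_CONFIG} is missing; kubelet will exit until it is created")
print(f"{KUBELET_CONFIG} present ({os.path.getsize(KUBELET_CONFIG)} bytes)")
```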
May 10 09:59:33.867720 coreos-metadata[1455]: May 10 09:59:33.867 WARN failed to locate config-drive, using the metadata service API instead May 10 09:59:33.880624 coreos-metadata[1525]: May 10 09:59:33.879 WARN failed to locate config-drive, using the metadata service API instead May 10 09:59:33.919811 coreos-metadata[1455]: May 10 09:59:33.919 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1 May 10 09:59:33.923245 coreos-metadata[1525]: May 10 09:59:33.923 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 May 10 09:59:34.107909 coreos-metadata[1525]: May 10 09:59:34.107 INFO Fetch successful May 10 09:59:34.107909 coreos-metadata[1525]: May 10 09:59:34.107 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 May 10 09:59:34.112593 coreos-metadata[1455]: May 10 09:59:34.112 INFO Fetch successful May 10 09:59:34.112908 coreos-metadata[1455]: May 10 09:59:34.112 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 May 10 09:59:34.121853 coreos-metadata[1525]: May 10 09:59:34.121 INFO Fetch successful May 10 09:59:34.126096 coreos-metadata[1455]: May 10 09:59:34.125 INFO Fetch successful May 10 09:59:34.126096 coreos-metadata[1455]: May 10 09:59:34.126 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 May 10 09:59:34.130142 unknown[1525]: wrote ssh authorized keys file for user: core May 10 09:59:34.138058 coreos-metadata[1455]: May 10 09:59:34.137 INFO Fetch successful May 10 09:59:34.138058 coreos-metadata[1455]: May 10 09:59:34.138 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 May 10 09:59:34.150179 coreos-metadata[1455]: May 10 09:59:34.150 INFO Fetch successful May 10 09:59:34.150179 coreos-metadata[1455]: May 10 09:59:34.150 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 May 10 09:59:34.163122 coreos-metadata[1455]: May 10 09:59:34.163 INFO Fetch successful May 10 09:59:34.163594 coreos-metadata[1455]: May 10 09:59:34.163 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 May 10 09:59:34.177177 coreos-metadata[1455]: May 10 09:59:34.177 INFO Fetch successful May 10 09:59:34.184850 update-ssh-keys[1689]: Updated "/home/core/.ssh/authorized_keys" May 10 09:59:34.188912 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). May 10 09:59:34.204824 systemd[1]: Finished sshkeys.service. May 10 09:59:34.239609 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. May 10 09:59:34.241782 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 10 09:59:34.242742 systemd[1]: Reached target multi-user.target - Multi-User System. May 10 09:59:34.243647 systemd[1]: Startup finished in 3.810s (kernel) + 16.354s (initrd) + 12.138s (userspace) = 32.303s. May 10 09:59:40.865496 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 10 09:59:40.869740 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 10 09:59:41.192456 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
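Both metadata agents above fall back from the missing config-drive to the HTTP metadata service and issue plain GETs against 169.254.169.254. A hedged illustration of the same fetches with urllib, using endpoints copied from the log (retry and error handling omitted):

```python
from urllib.request import urlopen

# Same link-local endpoints the coreos-metadata agents fetch above.
ENDPOINTS = [
    "http://169.254.169.254/latest/meta-data/hostname",
    "http://169.254.169.254/latest/meta-data/instance-id",
    "http://169.254.169.254/latest/meta-data/local-ipv4",
    "http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key",
]

for url in ENDPOINTS:
    with urlopen(url, timeout=5) as resp:
        print(url, "->", resp.read().decode().strip())
```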
May 10 09:59:41.210232 (kubelet)[1706]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 10 09:59:41.298105 kubelet[1706]: E0510 09:59:41.297944 1706 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 10 09:59:41.304762 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 10 09:59:41.304899 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 10 09:59:41.305179 systemd[1]: kubelet.service: Consumed 307ms CPU time, 96.2M memory peak. May 10 09:59:42.516181 systemd[1]: Started sshd@3-172.24.4.22:22-172.24.4.1:39566.service - OpenSSH per-connection server daemon (172.24.4.1:39566). May 10 09:59:43.886080 sshd[1715]: Accepted publickey for core from 172.24.4.1 port 39566 ssh2: RSA SHA256:s763iqE5ZQO2n9I9yHPInO5+M518XrNVWKB/LWGB6zk May 10 09:59:43.889008 sshd-session[1715]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 09:59:43.899839 systemd-logind[1468]: New session 6 of user core. May 10 09:59:43.909605 systemd[1]: Started session-6.scope - Session 6 of User core. May 10 09:59:44.506984 sshd[1717]: Connection closed by 172.24.4.1 port 39566 May 10 09:59:44.507046 sshd-session[1715]: pam_unix(sshd:session): session closed for user core May 10 09:59:44.522123 systemd[1]: sshd@3-172.24.4.22:22-172.24.4.1:39566.service: Deactivated successfully. May 10 09:59:44.525688 systemd[1]: session-6.scope: Deactivated successfully. May 10 09:59:44.528044 systemd-logind[1468]: Session 6 logged out. Waiting for processes to exit. May 10 09:59:44.532242 systemd[1]: Started sshd@4-172.24.4.22:22-172.24.4.1:59642.service - OpenSSH per-connection server daemon (172.24.4.1:59642). May 10 09:59:44.536496 systemd-logind[1468]: Removed session 6. May 10 09:59:45.767747 sshd[1722]: Accepted publickey for core from 172.24.4.1 port 59642 ssh2: RSA SHA256:s763iqE5ZQO2n9I9yHPInO5+M518XrNVWKB/LWGB6zk May 10 09:59:45.770954 sshd-session[1722]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 09:59:45.784752 systemd-logind[1468]: New session 7 of user core. May 10 09:59:45.794636 systemd[1]: Started session-7.scope - Session 7 of User core. May 10 09:59:46.335312 sshd[1725]: Connection closed by 172.24.4.1 port 59642 May 10 09:59:46.336589 sshd-session[1722]: pam_unix(sshd:session): session closed for user core May 10 09:59:46.356459 systemd[1]: sshd@4-172.24.4.22:22-172.24.4.1:59642.service: Deactivated successfully. May 10 09:59:46.360993 systemd[1]: session-7.scope: Deactivated successfully. May 10 09:59:46.363935 systemd-logind[1468]: Session 7 logged out. Waiting for processes to exit. May 10 09:59:46.368113 systemd[1]: Started sshd@5-172.24.4.22:22-172.24.4.1:59650.service - OpenSSH per-connection server daemon (172.24.4.1:59650). May 10 09:59:46.379072 systemd-logind[1468]: Removed session 7. May 10 09:59:47.754700 sshd[1730]: Accepted publickey for core from 172.24.4.1 port 59650 ssh2: RSA SHA256:s763iqE5ZQO2n9I9yHPInO5+M518XrNVWKB/LWGB6zk May 10 09:59:47.757536 sshd-session[1730]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 09:59:47.769211 systemd-logind[1468]: New session 8 of user core. 
May 10 09:59:47.780633 systemd[1]: Started session-8.scope - Session 8 of User core. May 10 09:59:48.532309 sshd[1733]: Connection closed by 172.24.4.1 port 59650 May 10 09:59:48.533234 sshd-session[1730]: pam_unix(sshd:session): session closed for user core May 10 09:59:48.549562 systemd[1]: sshd@5-172.24.4.22:22-172.24.4.1:59650.service: Deactivated successfully. May 10 09:59:48.553707 systemd[1]: session-8.scope: Deactivated successfully. May 10 09:59:48.557602 systemd-logind[1468]: Session 8 logged out. Waiting for processes to exit. May 10 09:59:48.560844 systemd[1]: Started sshd@6-172.24.4.22:22-172.24.4.1:59660.service - OpenSSH per-connection server daemon (172.24.4.1:59660). May 10 09:59:48.564097 systemd-logind[1468]: Removed session 8. May 10 09:59:50.008662 sshd[1738]: Accepted publickey for core from 172.24.4.1 port 59660 ssh2: RSA SHA256:s763iqE5ZQO2n9I9yHPInO5+M518XrNVWKB/LWGB6zk May 10 09:59:50.011436 sshd-session[1738]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 09:59:50.022842 systemd-logind[1468]: New session 9 of user core. May 10 09:59:50.030599 systemd[1]: Started session-9.scope - Session 9 of User core. May 10 09:59:50.521916 sudo[1742]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 10 09:59:50.523611 sudo[1742]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 10 09:59:50.543549 sudo[1742]: pam_unix(sudo:session): session closed for user root May 10 09:59:51.208481 sshd[1741]: Connection closed by 172.24.4.1 port 59660 May 10 09:59:51.209925 sshd-session[1738]: pam_unix(sshd:session): session closed for user core May 10 09:59:51.229567 systemd[1]: sshd@6-172.24.4.22:22-172.24.4.1:59660.service: Deactivated successfully. May 10 09:59:51.234990 systemd[1]: session-9.scope: Deactivated successfully. May 10 09:59:51.238757 systemd-logind[1468]: Session 9 logged out. Waiting for processes to exit. May 10 09:59:51.242564 systemd[1]: Started sshd@7-172.24.4.22:22-172.24.4.1:59672.service - OpenSSH per-connection server daemon (172.24.4.1:59672). May 10 09:59:51.246123 systemd-logind[1468]: Removed session 9. May 10 09:59:51.364792 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 10 09:59:51.368688 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 10 09:59:51.830951 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 10 09:59:51.842661 (kubelet)[1758]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 10 09:59:51.986595 kubelet[1758]: E0510 09:59:51.986458 1758 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 10 09:59:51.992911 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 10 09:59:51.993441 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 10 09:59:51.994667 systemd[1]: kubelet.service: Consumed 290ms CPU time, 95.7M memory peak. 
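The sudo entries in this stretch follow a fixed journal format (invoking user, PWD, target USER, COMMAND), which makes them easy to audit mechanically; a small parsing sketch using one of the lines above as input:

```python
import re

# Pull the invoking user, working directory, target user and command out of
# sudo journal lines like the ones above (sketch; field order as seen here).
SUDO_RE = re.compile(
    r"sudo\[\d+\]:\s+(?P<user>\S+) : PWD=(?P<pwd>\S+) ; USER=(?P<runas>\S+) ; COMMAND=(?P<cmd>.+)$"
)

line = ("May 10 09:59:50.521916 sudo[1742]: core : PWD=/home/core ; "
        "USER=root ; COMMAND=/usr/sbin/setenforce 1")
m = SUDO_RE.search(line)
if m:
    print(m.group("user"), "ran", repr(m.group("cmd")), "as", m.group("runas"))
```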
May 10 09:59:52.556041 sshd[1747]: Accepted publickey for core from 172.24.4.1 port 59672 ssh2: RSA SHA256:s763iqE5ZQO2n9I9yHPInO5+M518XrNVWKB/LWGB6zk May 10 09:59:52.559003 sshd-session[1747]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 09:59:52.570980 systemd-logind[1468]: New session 10 of user core. May 10 09:59:52.580628 systemd[1]: Started session-10.scope - Session 10 of User core. May 10 09:59:52.919984 sudo[1768]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 10 09:59:52.920785 sudo[1768]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 10 09:59:52.929872 sudo[1768]: pam_unix(sudo:session): session closed for user root May 10 09:59:52.940506 sudo[1767]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules May 10 09:59:52.941082 sudo[1767]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 10 09:59:52.954431 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 10 09:59:53.013652 augenrules[1790]: No rules May 10 09:59:53.015232 systemd[1]: audit-rules.service: Deactivated successfully. May 10 09:59:53.015477 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 10 09:59:53.016450 sudo[1767]: pam_unix(sudo:session): session closed for user root May 10 09:59:53.245456 sshd[1766]: Connection closed by 172.24.4.1 port 59672 May 10 09:59:53.246584 sshd-session[1747]: pam_unix(sshd:session): session closed for user core May 10 09:59:53.262066 systemd[1]: sshd@7-172.24.4.22:22-172.24.4.1:59672.service: Deactivated successfully. May 10 09:59:53.265742 systemd[1]: session-10.scope: Deactivated successfully. May 10 09:59:53.269658 systemd-logind[1468]: Session 10 logged out. Waiting for processes to exit. May 10 09:59:53.272688 systemd[1]: Started sshd@8-172.24.4.22:22-172.24.4.1:36510.service - OpenSSH per-connection server daemon (172.24.4.1:36510). May 10 09:59:53.275932 systemd-logind[1468]: Removed session 10. May 10 09:59:54.441572 sshd[1798]: Accepted publickey for core from 172.24.4.1 port 36510 ssh2: RSA SHA256:s763iqE5ZQO2n9I9yHPInO5+M518XrNVWKB/LWGB6zk May 10 09:59:54.443636 sshd-session[1798]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 09:59:54.452415 systemd-logind[1468]: New session 11 of user core. May 10 09:59:54.463579 systemd[1]: Started session-11.scope - Session 11 of User core. May 10 09:59:54.763590 sudo[1802]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 10 09:59:54.763964 sudo[1802]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 10 09:59:55.467938 systemd[1]: Starting docker.service - Docker Application Container Engine... May 10 09:59:55.485031 (dockerd)[1820]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 10 09:59:56.107878 dockerd[1820]: time="2025-05-10T09:59:56.107543194Z" level=info msg="Starting up" May 10 09:59:56.109882 dockerd[1820]: time="2025-05-10T09:59:56.109827087Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" May 10 09:59:56.157141 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport667567712-merged.mount: Deactivated successfully. May 10 09:59:56.199597 dockerd[1820]: time="2025-05-10T09:59:56.199360858Z" level=info msg="Loading containers: start." 
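augenrules reporting "No rules" above is the direct consequence of the sudo rm that removed the files under /etc/audit/rules.d/: augenrules assembles the active rule set from the *.rules files in that directory. A conceptual sketch of that merge step (the real tool also handles ordering and control directives such as -D and -e specially):

```python
import glob

# Conceptual sketch of what augenrules does: merge every *.rules file under
# /etc/audit/rules.d/ into one rule list (the real tool does more than this).
def merge_audit_rules(rules_dir="/etc/audit/rules.d"):
    merged = []
    for path in sorted(glob.glob(f"{rules_dir}/*.rules")):
        with open(path) as f:
            merged.extend(l.strip() for l in f if l.strip() and not l.startswith("#"))
    return merged

rules = merge_audit_rules()
print("\n".join(rules) if rules else "No rules")
```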
May 10 09:59:56.221481 kernel: Initializing XFRM netlink socket May 10 09:59:56.544442 systemd-timesyncd[1397]: Network configuration changed, trying to establish connection. May 10 09:59:56.602173 systemd-networkd[1419]: docker0: Link UP May 10 09:59:56.611221 dockerd[1820]: time="2025-05-10T09:59:56.611151902Z" level=info msg="Loading containers: done." May 10 09:59:57.499658 systemd-timesyncd[1397]: Contacted time server 64.44.115.65:123 (2.flatcar.pool.ntp.org). May 10 09:59:57.499781 systemd-timesyncd[1397]: Initial clock synchronization to Sat 2025-05-10 09:59:57.499211 UTC. May 10 09:59:57.500614 systemd-resolved[1369]: Clock change detected. Flushing caches. May 10 09:59:57.533337 dockerd[1820]: time="2025-05-10T09:59:57.533252561Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 10 09:59:57.533551 dockerd[1820]: time="2025-05-10T09:59:57.533482963Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 May 10 09:59:57.533791 dockerd[1820]: time="2025-05-10T09:59:57.533705551Z" level=info msg="Initializing buildkit" May 10 09:59:57.590539 dockerd[1820]: time="2025-05-10T09:59:57.590107412Z" level=info msg="Completed buildkit initialization" May 10 09:59:57.610913 dockerd[1820]: time="2025-05-10T09:59:57.610827082Z" level=info msg="Daemon has completed initialization" May 10 09:59:57.611585 dockerd[1820]: time="2025-05-10T09:59:57.611472553Z" level=info msg="API listen on /run/docker.sock" May 10 09:59:57.612012 systemd[1]: Started docker.service - Docker Application Container Engine. May 10 09:59:59.418897 containerd[1491]: time="2025-05-10T09:59:59.418820744Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\"" May 10 10:00:00.317498 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount652751932.mount: Deactivated successfully. 
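The dockerd timestamps make it easy to measure individual phases: "Loading containers: start." was logged at 2025-05-10T09:59:56.199360858Z and "Loading containers: done." at 2025-05-10T09:59:56.611151902Z above, roughly 0.41 s apart. A small helper for computing that kind of delta (nanosecond fractions are truncated to microseconds):

```python
from datetime import datetime, timezone

# Compute the gap between two dockerd/containerd log timestamps such as
# "2025-05-10T09:59:56.199360858Z".
def parse_ts(ts):
    base, frac = ts.rstrip("Z").split(".")
    dt = datetime.strptime(base, "%Y-%m-%dT%H:%M:%S").replace(tzinfo=timezone.utc)
    return dt.replace(microsecond=int(frac[:6].ljust(6, "0")))

start = parse_ts("2025-05-10T09:59:56.199360858Z")   # "Loading containers: start."
done = parse_ts("2025-05-10T09:59:56.611151902Z")    # "Loading containers: done."
print((done - start).total_seconds())                 # ~0.412 s
```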
May 10 10:00:02.248718 containerd[1491]: time="2025-05-10T10:00:02.248654707Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 10:00:02.250128 containerd[1491]: time="2025-05-10T10:00:02.249890905Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.12: active requests=0, bytes read=32674881" May 10 10:00:02.251476 containerd[1491]: time="2025-05-10T10:00:02.251414122Z" level=info msg="ImageCreate event name:\"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 10:00:02.255228 containerd[1491]: time="2025-05-10T10:00:02.255148295Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 10:00:02.256445 containerd[1491]: time="2025-05-10T10:00:02.256250863Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.12\" with image id \"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\", size \"32671673\" in 2.83735555s" May 10 10:00:02.256445 containerd[1491]: time="2025-05-10T10:00:02.256292131Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\" returns image reference \"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\"" May 10 10:00:02.276110 containerd[1491]: time="2025-05-10T10:00:02.276047462Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\"" May 10 10:00:02.962649 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. May 10 10:00:02.964846 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 10 10:00:03.083969 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 10 10:00:03.091893 (kubelet)[2100]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 10 10:00:03.498087 kubelet[2100]: E0510 10:00:03.497999 2100 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 10 10:00:03.500767 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 10 10:00:03.500911 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 10 10:00:03.501186 systemd[1]: kubelet.service: Consumed 188ms CPU time, 95.7M memory peak. 
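The pull record above includes both the byte count and the wall-clock time for kube-apiserver:v1.30.12 (32674881 bytes read, completed in 2.83735555 s), so effective throughput can be derived directly; a one-off calculation with those figures:

```python
# Throughput of the kube-apiserver:v1.30.12 pull, using the figures logged above.
bytes_read = 32674881          # "active requests=0, bytes read=32674881"
seconds = 2.83735555           # "... in 2.83735555s"

mib = bytes_read / (1024 * 1024)
print(f"{mib:.1f} MiB in {seconds:.2f} s -> {mib / seconds:.1f} MiB/s")
```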
May 10 10:00:06.009418 containerd[1491]: time="2025-05-10T10:00:06.009228191Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 10:00:06.041032 containerd[1491]: time="2025-05-10T10:00:06.040934126Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.12: active requests=0, bytes read=29617542" May 10 10:00:06.099514 containerd[1491]: time="2025-05-10T10:00:06.098960955Z" level=info msg="ImageCreate event name:\"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 10:00:06.162701 containerd[1491]: time="2025-05-10T10:00:06.162601743Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 10:00:06.171142 containerd[1491]: time="2025-05-10T10:00:06.171052643Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.12\" with image id \"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\", size \"31105907\" in 3.894928237s" May 10 10:00:06.173052 containerd[1491]: time="2025-05-10T10:00:06.171349279Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\" returns image reference \"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\"" May 10 10:00:06.215708 containerd[1491]: time="2025-05-10T10:00:06.215605722Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\"" May 10 10:00:08.444188 containerd[1491]: time="2025-05-10T10:00:08.443995417Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 10:00:08.445400 containerd[1491]: time="2025-05-10T10:00:08.445332234Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.12: active requests=0, bytes read=17903690" May 10 10:00:08.447478 containerd[1491]: time="2025-05-10T10:00:08.447429918Z" level=info msg="ImageCreate event name:\"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 10:00:08.450951 containerd[1491]: time="2025-05-10T10:00:08.450908793Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 10:00:08.452139 containerd[1491]: time="2025-05-10T10:00:08.451992375Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.12\" with image id \"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\", size \"19392073\" in 2.235817435s" May 10 10:00:08.452139 containerd[1491]: time="2025-05-10T10:00:08.452036908Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\" returns image reference \"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\"" May 10 10:00:08.469821 
containerd[1491]: time="2025-05-10T10:00:08.469785366Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\"" May 10 10:00:09.934536 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2896862368.mount: Deactivated successfully. May 10 10:00:10.476598 containerd[1491]: time="2025-05-10T10:00:10.476553779Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 10:00:10.477797 containerd[1491]: time="2025-05-10T10:00:10.477774839Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.12: active requests=0, bytes read=29185825" May 10 10:00:10.479237 containerd[1491]: time="2025-05-10T10:00:10.479214830Z" level=info msg="ImageCreate event name:\"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 10:00:10.481668 containerd[1491]: time="2025-05-10T10:00:10.481646310Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 10:00:10.482224 containerd[1491]: time="2025-05-10T10:00:10.482179070Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.12\" with image id \"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\", repo tag \"registry.k8s.io/kube-proxy:v1.30.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\", size \"29184836\" in 2.012357115s" May 10 10:00:10.482271 containerd[1491]: time="2025-05-10T10:00:10.482225176Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\" returns image reference \"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\"" May 10 10:00:10.501447 containerd[1491]: time="2025-05-10T10:00:10.501401201Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" May 10 10:00:11.625889 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2713245831.mount: Deactivated successfully. May 10 10:00:11.806037 update_engine[1471]: I20250510 10:00:11.804446 1471 update_attempter.cc:509] Updating boot flags... 
May 10 10:00:11.857399 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2164) May 10 10:00:11.921537 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2168) May 10 10:00:12.016413 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2168) May 10 10:00:13.312142 containerd[1491]: time="2025-05-10T10:00:13.311508316Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 10:00:13.313787 containerd[1491]: time="2025-05-10T10:00:13.313738719Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185769" May 10 10:00:13.315454 containerd[1491]: time="2025-05-10T10:00:13.315399955Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 10:00:13.320635 containerd[1491]: time="2025-05-10T10:00:13.320571494Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 10:00:13.321935 containerd[1491]: time="2025-05-10T10:00:13.321757578Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 2.820313387s" May 10 10:00:13.321935 containerd[1491]: time="2025-05-10T10:00:13.321795740Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" May 10 10:00:13.339150 containerd[1491]: time="2025-05-10T10:00:13.339003112Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" May 10 10:00:13.712676 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. May 10 10:00:13.719463 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 10 10:00:13.900503 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 10 10:00:13.909617 (kubelet)[2219]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 10 10:00:14.233249 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount646087989.mount: Deactivated successfully. 
May 10 10:00:14.243749 containerd[1491]: time="2025-05-10T10:00:14.242758555Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 10:00:14.246224 containerd[1491]: time="2025-05-10T10:00:14.246131962Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322298" May 10 10:00:14.250834 containerd[1491]: time="2025-05-10T10:00:14.250519792Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 10:00:14.257602 kubelet[2219]: E0510 10:00:14.257492 2219 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 10 10:00:14.260647 containerd[1491]: time="2025-05-10T10:00:14.259180936Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 10:00:14.260647 containerd[1491]: time="2025-05-10T10:00:14.259984022Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 920.947066ms" May 10 10:00:14.260647 containerd[1491]: time="2025-05-10T10:00:14.260010822Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" May 10 10:00:14.262583 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 10 10:00:14.262727 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 10 10:00:14.263005 systemd[1]: kubelet.service: Consumed 251ms CPU time, 96M memory peak. May 10 10:00:14.280414 containerd[1491]: time="2025-05-10T10:00:14.280292380Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" May 10 10:00:14.945291 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2076048388.mount: Deactivated successfully. 
May 10 10:00:17.720644 containerd[1491]: time="2025-05-10T10:00:17.720548960Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 10:00:17.722299 containerd[1491]: time="2025-05-10T10:00:17.721960488Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238579" May 10 10:00:17.723574 containerd[1491]: time="2025-05-10T10:00:17.723509272Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 10:00:17.727044 containerd[1491]: time="2025-05-10T10:00:17.726965374Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 10:00:17.728423 containerd[1491]: time="2025-05-10T10:00:17.728155006Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 3.447605964s" May 10 10:00:17.728423 containerd[1491]: time="2025-05-10T10:00:17.728184721Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" May 10 10:00:22.795994 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 10 10:00:22.796488 systemd[1]: kubelet.service: Consumed 251ms CPU time, 96M memory peak. May 10 10:00:22.801419 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 10 10:00:22.835305 systemd[1]: Reload requested from client PID 2365 ('systemctl') (unit session-11.scope)... May 10 10:00:22.835476 systemd[1]: Reloading... May 10 10:00:22.949487 zram_generator::config[2410]: No configuration found. May 10 10:00:23.687574 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 10 10:00:23.831955 systemd[1]: Reloading finished in 996 ms. May 10 10:00:23.899241 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM May 10 10:00:23.899321 systemd[1]: kubelet.service: Failed with result 'signal'. May 10 10:00:23.899820 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 10 10:00:23.899863 systemd[1]: kubelet.service: Consumed 117ms CPU time, 83.6M memory peak. May 10 10:00:23.903126 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 10 10:00:24.059427 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 10 10:00:24.071822 (kubelet)[2477]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 10 10:00:24.125798 kubelet[2477]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 10 10:00:24.126146 kubelet[2477]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. 
Image garbage collector will get sandbox image information from CRI. May 10 10:00:24.126196 kubelet[2477]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 10 10:00:24.126322 kubelet[2477]: I0510 10:00:24.126291 2477 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 10 10:00:24.522079 kubelet[2477]: I0510 10:00:24.522045 2477 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" May 10 10:00:24.522251 kubelet[2477]: I0510 10:00:24.522240 2477 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 10 10:00:24.522596 kubelet[2477]: I0510 10:00:24.522580 2477 server.go:927] "Client rotation is on, will bootstrap in background" May 10 10:00:24.539157 kubelet[2477]: I0510 10:00:24.539131 2477 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 10 10:00:24.542381 kubelet[2477]: E0510 10:00:24.542074 2477 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.24.4.22:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.24.4.22:6443: connect: connection refused May 10 10:00:24.563453 kubelet[2477]: I0510 10:00:24.563348 2477 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 10 10:00:24.563934 kubelet[2477]: I0510 10:00:24.563861 2477 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 10 10:00:24.564343 kubelet[2477]: I0510 10:00:24.563932 2477 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4330-0-0-n-cc41d9e3f6.novalocal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} May 10 10:00:24.564455 
kubelet[2477]: I0510 10:00:24.564401 2477 topology_manager.go:138] "Creating topology manager with none policy" May 10 10:00:24.564455 kubelet[2477]: I0510 10:00:24.564428 2477 container_manager_linux.go:301] "Creating device plugin manager" May 10 10:00:24.564702 kubelet[2477]: I0510 10:00:24.564660 2477 state_mem.go:36] "Initialized new in-memory state store" May 10 10:00:24.566923 kubelet[2477]: I0510 10:00:24.566887 2477 kubelet.go:400] "Attempting to sync node with API server" May 10 10:00:24.566982 kubelet[2477]: I0510 10:00:24.566929 2477 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" May 10 10:00:24.566982 kubelet[2477]: I0510 10:00:24.566972 2477 kubelet.go:312] "Adding apiserver pod source" May 10 10:00:24.567032 kubelet[2477]: I0510 10:00:24.567003 2477 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 10 10:00:24.587843 kubelet[2477]: W0510 10:00:24.587523 2477 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.24.4.22:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.22:6443: connect: connection refused May 10 10:00:24.589248 kubelet[2477]: E0510 10:00:24.588519 2477 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.24.4.22:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.22:6443: connect: connection refused May 10 10:00:24.589248 kubelet[2477]: W0510 10:00:24.588222 2477 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.24.4.22:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4330-0-0-n-cc41d9e3f6.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.22:6443: connect: connection refused May 10 10:00:24.589248 kubelet[2477]: E0510 10:00:24.588632 2477 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.24.4.22:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4330-0-0-n-cc41d9e3f6.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.22:6443: connect: connection refused May 10 10:00:24.589387 kubelet[2477]: I0510 10:00:24.589210 2477 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" May 10 10:00:24.593662 kubelet[2477]: I0510 10:00:24.592938 2477 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 10 10:00:24.593662 kubelet[2477]: W0510 10:00:24.593049 2477 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
May 10 10:00:24.594814 kubelet[2477]: I0510 10:00:24.594796 2477 server.go:1264] "Started kubelet" May 10 10:00:24.599706 kubelet[2477]: I0510 10:00:24.599639 2477 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 10 10:00:24.602007 kubelet[2477]: I0510 10:00:24.601975 2477 server.go:455] "Adding debug handlers to kubelet server" May 10 10:00:24.605075 kubelet[2477]: I0510 10:00:24.604592 2477 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 10 10:00:24.605075 kubelet[2477]: I0510 10:00:24.604925 2477 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 10 10:00:24.605352 kubelet[2477]: E0510 10:00:24.605053 2477 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.24.4.22:6443/api/v1/namespaces/default/events\": dial tcp 172.24.4.22:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4330-0-0-n-cc41d9e3f6.novalocal.183e221f9a52922a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4330-0-0-n-cc41d9e3f6.novalocal,UID:ci-4330-0-0-n-cc41d9e3f6.novalocal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4330-0-0-n-cc41d9e3f6.novalocal,},FirstTimestamp:2025-05-10 10:00:24.59476433 +0000 UTC m=+0.518498104,LastTimestamp:2025-05-10 10:00:24.59476433 +0000 UTC m=+0.518498104,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4330-0-0-n-cc41d9e3f6.novalocal,}" May 10 10:00:24.607575 kubelet[2477]: I0510 10:00:24.607298 2477 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 10 10:00:24.615430 kubelet[2477]: I0510 10:00:24.613955 2477 volume_manager.go:291] "Starting Kubelet Volume Manager" May 10 10:00:24.615430 kubelet[2477]: E0510 10:00:24.614985 2477 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.22:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4330-0-0-n-cc41d9e3f6.novalocal?timeout=10s\": dial tcp 172.24.4.22:6443: connect: connection refused" interval="200ms" May 10 10:00:24.615430 kubelet[2477]: I0510 10:00:24.615095 2477 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 10 10:00:24.616855 kubelet[2477]: I0510 10:00:24.616788 2477 reconciler.go:26] "Reconciler: start to sync state" May 10 10:00:24.617431 kubelet[2477]: E0510 10:00:24.617351 2477 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 10 10:00:24.620117 kubelet[2477]: W0510 10:00:24.620041 2477 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.24.4.22:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.22:6443: connect: connection refused May 10 10:00:24.620340 kubelet[2477]: E0510 10:00:24.620315 2477 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.24.4.22:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.22:6443: connect: connection refused May 10 10:00:24.621353 kubelet[2477]: I0510 10:00:24.621317 2477 factory.go:221] Registration of the containerd container factory successfully May 10 10:00:24.621569 kubelet[2477]: I0510 10:00:24.621547 2477 factory.go:221] Registration of the systemd container factory successfully May 10 10:00:24.623524 kubelet[2477]: I0510 10:00:24.621815 2477 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 10 10:00:24.625584 kubelet[2477]: I0510 10:00:24.625526 2477 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 10 10:00:24.626541 kubelet[2477]: I0510 10:00:24.626504 2477 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 10 10:00:24.626541 kubelet[2477]: I0510 10:00:24.626528 2477 status_manager.go:217] "Starting to sync pod status with apiserver" May 10 10:00:24.626541 kubelet[2477]: I0510 10:00:24.626549 2477 kubelet.go:2337] "Starting kubelet main sync loop" May 10 10:00:24.626760 kubelet[2477]: E0510 10:00:24.626592 2477 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 10 10:00:24.632543 kubelet[2477]: W0510 10:00:24.632480 2477 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.24.4.22:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.22:6443: connect: connection refused May 10 10:00:24.632543 kubelet[2477]: E0510 10:00:24.632533 2477 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.24.4.22:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.22:6443: connect: connection refused May 10 10:00:24.654235 kubelet[2477]: I0510 10:00:24.654183 2477 cpu_manager.go:214] "Starting CPU manager" policy="none" May 10 10:00:24.654235 kubelet[2477]: I0510 10:00:24.654213 2477 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 10 10:00:24.654235 kubelet[2477]: I0510 10:00:24.654228 2477 state_mem.go:36] "Initialized new in-memory state store" May 10 10:00:24.660675 kubelet[2477]: I0510 10:00:24.660641 2477 policy_none.go:49] "None policy: Start" May 10 10:00:24.661471 kubelet[2477]: I0510 10:00:24.661344 2477 memory_manager.go:170] "Starting memorymanager" policy="None" May 10 10:00:24.661471 kubelet[2477]: I0510 10:00:24.661457 2477 state_mem.go:35] "Initializing new in-memory state store" May 10 10:00:24.672112 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
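[Editor's note] The reflector warnings above are client-go LIST calls against https://172.24.4.22:6443 failing with "connection refused" because the kube-apiserver static pod is not up yet. The sketch below issues the same kind of LIST (services, limit=500) with client-go; the kubeconfig path is an assumption, and this is an external client, not the kubelet's informer machinery.

    package main

    import (
    	"context"
    	"fmt"
    	"log"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// The kubelet uses its own credentials; here we assume an admin kubeconfig.
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
    	if err != nil {
    		log.Fatal(err)
    	}
    	clientset, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		log.Fatal(err)
    	}

    	// Same shape of request as the failing reflector LIST: services across all namespaces, limit=500.
    	svcs, err := clientset.CoreV1().Services(metav1.NamespaceAll).List(context.Background(),
    		metav1.ListOptions{Limit: 500})
    	if err != nil {
    		// While the apiserver is down this returns "connection refused", as in the log.
    		log.Fatalf("list services: %v", err)
    	}
    	fmt.Printf("apiserver reachable, %d services\n", len(svcs.Items))
    }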
May 10 10:00:24.683060 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 10 10:00:24.687902 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. May 10 10:00:24.698978 kubelet[2477]: I0510 10:00:24.698215 2477 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 10 10:00:24.698978 kubelet[2477]: I0510 10:00:24.698411 2477 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 10 10:00:24.698978 kubelet[2477]: I0510 10:00:24.698509 2477 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 10 10:00:24.701995 kubelet[2477]: E0510 10:00:24.701978 2477 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4330-0-0-n-cc41d9e3f6.novalocal\" not found" May 10 10:00:24.715671 kubelet[2477]: I0510 10:00:24.715655 2477 kubelet_node_status.go:73] "Attempting to register node" node="ci-4330-0-0-n-cc41d9e3f6.novalocal" May 10 10:00:24.716264 kubelet[2477]: E0510 10:00:24.716223 2477 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.24.4.22:6443/api/v1/nodes\": dial tcp 172.24.4.22:6443: connect: connection refused" node="ci-4330-0-0-n-cc41d9e3f6.novalocal" May 10 10:00:24.727586 kubelet[2477]: I0510 10:00:24.727563 2477 topology_manager.go:215] "Topology Admit Handler" podUID="196593245d593d95d14c0d8e64ab0ca6" podNamespace="kube-system" podName="kube-apiserver-ci-4330-0-0-n-cc41d9e3f6.novalocal" May 10 10:00:24.729238 kubelet[2477]: I0510 10:00:24.729133 2477 topology_manager.go:215] "Topology Admit Handler" podUID="55c925e5edf8610a669631d429e701e8" podNamespace="kube-system" podName="kube-controller-manager-ci-4330-0-0-n-cc41d9e3f6.novalocal" May 10 10:00:24.732467 kubelet[2477]: I0510 10:00:24.732386 2477 topology_manager.go:215] "Topology Admit Handler" podUID="bc3bb21412fa24e00345a51568c8d59f" podNamespace="kube-system" podName="kube-scheduler-ci-4330-0-0-n-cc41d9e3f6.novalocal" May 10 10:00:24.751018 systemd[1]: Created slice kubepods-burstable-pod55c925e5edf8610a669631d429e701e8.slice - libcontainer container kubepods-burstable-pod55c925e5edf8610a669631d429e701e8.slice. May 10 10:00:24.763581 systemd[1]: Created slice kubepods-burstable-pod196593245d593d95d14c0d8e64ab0ca6.slice - libcontainer container kubepods-burstable-pod196593245d593d95d14c0d8e64ab0ca6.slice. May 10 10:00:24.769279 systemd[1]: Created slice kubepods-burstable-podbc3bb21412fa24e00345a51568c8d59f.slice - libcontainer container kubepods-burstable-podbc3bb21412fa24e00345a51568c8d59f.slice. 
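[Editor's note] "Attempting to register node" followed by "Unable to register node with API server" is the kubelet POSTing its own Node object to /api/v1/nodes and hitting the same refused connection. A rough client-go sketch of that registration call follows; the node name is taken from the log, while the kubeconfig path and label are assumptions, and the real kubelet populates far more of the Node object.

    package main

    import (
    	"context"
    	"fmt"
    	"log"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubelet.conf")
    	if err != nil {
    		log.Fatal(err)
    	}
    	clientset, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		log.Fatal(err)
    	}

    	node := &corev1.Node{
    		ObjectMeta: metav1.ObjectMeta{
    			// Node name seen throughout the log.
    			Name:   "ci-4330-0-0-n-cc41d9e3f6.novalocal",
    			Labels: map[string]string{"kubernetes.io/hostname": "ci-4330-0-0-n-cc41d9e3f6.novalocal"},
    		},
    	}

    	// Equivalent of the kubelet's POST https://172.24.4.22:6443/api/v1/nodes.
    	created, err := clientset.CoreV1().Nodes().Create(context.Background(), node, metav1.CreateOptions{})
    	if err != nil {
    		log.Fatalf("register node: %v", err) // "connection refused" while the apiserver is down
    	}
    	fmt.Println("registered node", created.Name)
    }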
May 10 10:00:24.817445 kubelet[2477]: E0510 10:00:24.816690 2477 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.22:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4330-0-0-n-cc41d9e3f6.novalocal?timeout=10s\": dial tcp 172.24.4.22:6443: connect: connection refused" interval="400ms" May 10 10:00:24.819806 kubelet[2477]: I0510 10:00:24.819668 2477 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/196593245d593d95d14c0d8e64ab0ca6-k8s-certs\") pod \"kube-apiserver-ci-4330-0-0-n-cc41d9e3f6.novalocal\" (UID: \"196593245d593d95d14c0d8e64ab0ca6\") " pod="kube-system/kube-apiserver-ci-4330-0-0-n-cc41d9e3f6.novalocal" May 10 10:00:24.819806 kubelet[2477]: I0510 10:00:24.819749 2477 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/55c925e5edf8610a669631d429e701e8-ca-certs\") pod \"kube-controller-manager-ci-4330-0-0-n-cc41d9e3f6.novalocal\" (UID: \"55c925e5edf8610a669631d429e701e8\") " pod="kube-system/kube-controller-manager-ci-4330-0-0-n-cc41d9e3f6.novalocal" May 10 10:00:24.819995 kubelet[2477]: I0510 10:00:24.819934 2477 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/bc3bb21412fa24e00345a51568c8d59f-kubeconfig\") pod \"kube-scheduler-ci-4330-0-0-n-cc41d9e3f6.novalocal\" (UID: \"bc3bb21412fa24e00345a51568c8d59f\") " pod="kube-system/kube-scheduler-ci-4330-0-0-n-cc41d9e3f6.novalocal" May 10 10:00:24.820126 kubelet[2477]: I0510 10:00:24.820067 2477 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/196593245d593d95d14c0d8e64ab0ca6-ca-certs\") pod \"kube-apiserver-ci-4330-0-0-n-cc41d9e3f6.novalocal\" (UID: \"196593245d593d95d14c0d8e64ab0ca6\") " pod="kube-system/kube-apiserver-ci-4330-0-0-n-cc41d9e3f6.novalocal" May 10 10:00:24.820239 kubelet[2477]: I0510 10:00:24.820150 2477 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/196593245d593d95d14c0d8e64ab0ca6-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4330-0-0-n-cc41d9e3f6.novalocal\" (UID: \"196593245d593d95d14c0d8e64ab0ca6\") " pod="kube-system/kube-apiserver-ci-4330-0-0-n-cc41d9e3f6.novalocal" May 10 10:00:24.820312 kubelet[2477]: I0510 10:00:24.820230 2477 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/55c925e5edf8610a669631d429e701e8-flexvolume-dir\") pod \"kube-controller-manager-ci-4330-0-0-n-cc41d9e3f6.novalocal\" (UID: \"55c925e5edf8610a669631d429e701e8\") " pod="kube-system/kube-controller-manager-ci-4330-0-0-n-cc41d9e3f6.novalocal" May 10 10:00:24.820312 kubelet[2477]: I0510 10:00:24.820282 2477 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/55c925e5edf8610a669631d429e701e8-k8s-certs\") pod \"kube-controller-manager-ci-4330-0-0-n-cc41d9e3f6.novalocal\" (UID: \"55c925e5edf8610a669631d429e701e8\") " pod="kube-system/kube-controller-manager-ci-4330-0-0-n-cc41d9e3f6.novalocal" May 10 10:00:24.820493 kubelet[2477]: I0510 10:00:24.820331 2477 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/55c925e5edf8610a669631d429e701e8-kubeconfig\") pod \"kube-controller-manager-ci-4330-0-0-n-cc41d9e3f6.novalocal\" (UID: \"55c925e5edf8610a669631d429e701e8\") " pod="kube-system/kube-controller-manager-ci-4330-0-0-n-cc41d9e3f6.novalocal" May 10 10:00:24.820746 kubelet[2477]: I0510 10:00:24.820671 2477 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/55c925e5edf8610a669631d429e701e8-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4330-0-0-n-cc41d9e3f6.novalocal\" (UID: \"55c925e5edf8610a669631d429e701e8\") " pod="kube-system/kube-controller-manager-ci-4330-0-0-n-cc41d9e3f6.novalocal" May 10 10:00:24.920332 kubelet[2477]: I0510 10:00:24.920155 2477 kubelet_node_status.go:73] "Attempting to register node" node="ci-4330-0-0-n-cc41d9e3f6.novalocal" May 10 10:00:24.921696 kubelet[2477]: E0510 10:00:24.921612 2477 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.24.4.22:6443/api/v1/nodes\": dial tcp 172.24.4.22:6443: connect: connection refused" node="ci-4330-0-0-n-cc41d9e3f6.novalocal" May 10 10:00:25.062744 containerd[1491]: time="2025-05-10T10:00:25.062650550Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4330-0-0-n-cc41d9e3f6.novalocal,Uid:55c925e5edf8610a669631d429e701e8,Namespace:kube-system,Attempt:0,}" May 10 10:00:25.068183 containerd[1491]: time="2025-05-10T10:00:25.067670595Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4330-0-0-n-cc41d9e3f6.novalocal,Uid:196593245d593d95d14c0d8e64ab0ca6,Namespace:kube-system,Attempt:0,}" May 10 10:00:25.075628 containerd[1491]: time="2025-05-10T10:00:25.075202632Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4330-0-0-n-cc41d9e3f6.novalocal,Uid:bc3bb21412fa24e00345a51568c8d59f,Namespace:kube-system,Attempt:0,}" May 10 10:00:25.217685 kubelet[2477]: E0510 10:00:25.217584 2477 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.22:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4330-0-0-n-cc41d9e3f6.novalocal?timeout=10s\": dial tcp 172.24.4.22:6443: connect: connection refused" interval="800ms" May 10 10:00:25.325970 kubelet[2477]: I0510 10:00:25.325814 2477 kubelet_node_status.go:73] "Attempting to register node" node="ci-4330-0-0-n-cc41d9e3f6.novalocal" May 10 10:00:25.326627 kubelet[2477]: E0510 10:00:25.326326 2477 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.24.4.22:6443/api/v1/nodes\": dial tcp 172.24.4.22:6443: connect: connection refused" node="ci-4330-0-0-n-cc41d9e3f6.novalocal" May 10 10:00:25.462423 kubelet[2477]: W0510 10:00:25.462244 2477 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.24.4.22:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.22:6443: connect: connection refused May 10 10:00:25.462645 kubelet[2477]: E0510 10:00:25.462506 2477 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.24.4.22:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.22:6443: connect: connection refused May 10 10:00:25.643954 kubelet[2477]: W0510 10:00:25.643630 2477 
reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.24.4.22:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.22:6443: connect: connection refused May 10 10:00:25.643954 kubelet[2477]: E0510 10:00:25.643761 2477 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.24.4.22:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.22:6443: connect: connection refused May 10 10:00:25.667431 kubelet[2477]: W0510 10:00:25.666858 2477 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.24.4.22:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4330-0-0-n-cc41d9e3f6.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.22:6443: connect: connection refused May 10 10:00:25.667431 kubelet[2477]: W0510 10:00:25.666936 2477 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.24.4.22:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.22:6443: connect: connection refused May 10 10:00:25.667431 kubelet[2477]: E0510 10:00:25.667130 2477 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.24.4.22:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.22:6443: connect: connection refused May 10 10:00:25.667431 kubelet[2477]: E0510 10:00:25.667036 2477 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.24.4.22:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4330-0-0-n-cc41d9e3f6.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.22:6443: connect: connection refused May 10 10:00:25.689787 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3512208657.mount: Deactivated successfully. 
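[Editor's note] The repeated "Failed to ensure lease exists, will retry" entries (with the backoff interval growing 200ms, 400ms, 800ms, 1.6s) are the kubelet's node-lease controller trying to fetch, and if necessary create, a Lease named after the node in the kube-node-lease namespace. A minimal read-only sketch of that GET is below; the kubeconfig path is an assumption, the names come from the log.

    package main

    import (
    	"context"
    	"fmt"
    	"log"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubelet.conf")
    	if err != nil {
    		log.Fatal(err)
    	}
    	clientset, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		log.Fatal(err)
    	}

    	// Same object the kubelet keeps renewing as its heartbeat once the apiserver is up.
    	lease, err := clientset.CoordinationV1().Leases("kube-node-lease").Get(
    		context.Background(), "ci-4330-0-0-n-cc41d9e3f6.novalocal", metav1.GetOptions{})
    	if err != nil {
    		log.Fatalf("get node lease: %v", err)
    	}
    	fmt.Printf("lease %s present, resourceVersion %s\n", lease.Name, lease.ResourceVersion)
    }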
May 10 10:00:25.705513 containerd[1491]: time="2025-05-10T10:00:25.705096197Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 10 10:00:25.708926 containerd[1491]: time="2025-05-10T10:00:25.708828376Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146" May 10 10:00:25.711756 containerd[1491]: time="2025-05-10T10:00:25.711657202Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 10 10:00:25.713334 containerd[1491]: time="2025-05-10T10:00:25.713270668Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 10 10:00:25.717507 containerd[1491]: time="2025-05-10T10:00:25.716680072Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 10 10:00:25.717507 containerd[1491]: time="2025-05-10T10:00:25.717244040Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" May 10 10:00:25.720586 containerd[1491]: time="2025-05-10T10:00:25.720459571Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" May 10 10:00:25.723305 containerd[1491]: time="2025-05-10T10:00:25.723059737Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 10 10:00:25.727952 containerd[1491]: time="2025-05-10T10:00:25.726671722Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 652.885166ms" May 10 10:00:25.730319 containerd[1491]: time="2025-05-10T10:00:25.730113637Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 662.26642ms" May 10 10:00:25.761430 containerd[1491]: time="2025-05-10T10:00:25.759886195Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 666.52637ms" May 10 10:00:25.787436 containerd[1491]: time="2025-05-10T10:00:25.786748566Z" level=info msg="connecting to shim 4df4d3d481fd8465d0ddc16d0a323166b93e53b321916ad98171e184e66c65a7" address="unix:///run/containerd/s/1a126447ad2ab2dc6d1689eccd922b927f82e0d1caa273749ee49653e4b09065" namespace=k8s.io protocol=ttrpc version=3 May 10 
10:00:25.815676 containerd[1491]: time="2025-05-10T10:00:25.815624402Z" level=info msg="connecting to shim 8b36cada24d1025200a3089403edec560e87f6dcd7ddadaa5170dd58b055374e" address="unix:///run/containerd/s/0b8a98e068ebee54fedcb87d9bf57bbb1060873b6fc3377236c95ff6ea25e8b4" namespace=k8s.io protocol=ttrpc version=3 May 10 10:00:25.817259 containerd[1491]: time="2025-05-10T10:00:25.817208533Z" level=info msg="connecting to shim 3511290f84cdb5a8a4b511c0aee9119334db481359d52c4e34c0778d86cce0b0" address="unix:///run/containerd/s/e9ee31955c694d5e700db747ec47d7381df32a162f3803e3a530374561b24836" namespace=k8s.io protocol=ttrpc version=3 May 10 10:00:25.842599 systemd[1]: Started cri-containerd-4df4d3d481fd8465d0ddc16d0a323166b93e53b321916ad98171e184e66c65a7.scope - libcontainer container 4df4d3d481fd8465d0ddc16d0a323166b93e53b321916ad98171e184e66c65a7. May 10 10:00:25.858536 systemd[1]: Started cri-containerd-8b36cada24d1025200a3089403edec560e87f6dcd7ddadaa5170dd58b055374e.scope - libcontainer container 8b36cada24d1025200a3089403edec560e87f6dcd7ddadaa5170dd58b055374e. May 10 10:00:25.863783 systemd[1]: Started cri-containerd-3511290f84cdb5a8a4b511c0aee9119334db481359d52c4e34c0778d86cce0b0.scope - libcontainer container 3511290f84cdb5a8a4b511c0aee9119334db481359d52c4e34c0778d86cce0b0. May 10 10:00:25.927398 containerd[1491]: time="2025-05-10T10:00:25.926410091Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4330-0-0-n-cc41d9e3f6.novalocal,Uid:196593245d593d95d14c0d8e64ab0ca6,Namespace:kube-system,Attempt:0,} returns sandbox id \"4df4d3d481fd8465d0ddc16d0a323166b93e53b321916ad98171e184e66c65a7\"" May 10 10:00:25.932685 containerd[1491]: time="2025-05-10T10:00:25.932381671Z" level=info msg="CreateContainer within sandbox \"4df4d3d481fd8465d0ddc16d0a323166b93e53b321916ad98171e184e66c65a7\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 10 10:00:25.953398 containerd[1491]: time="2025-05-10T10:00:25.953286758Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4330-0-0-n-cc41d9e3f6.novalocal,Uid:55c925e5edf8610a669631d429e701e8,Namespace:kube-system,Attempt:0,} returns sandbox id \"8b36cada24d1025200a3089403edec560e87f6dcd7ddadaa5170dd58b055374e\"" May 10 10:00:25.955154 containerd[1491]: time="2025-05-10T10:00:25.954919290Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4330-0-0-n-cc41d9e3f6.novalocal,Uid:bc3bb21412fa24e00345a51568c8d59f,Namespace:kube-system,Attempt:0,} returns sandbox id \"3511290f84cdb5a8a4b511c0aee9119334db481359d52c4e34c0778d86cce0b0\"" May 10 10:00:25.957644 containerd[1491]: time="2025-05-10T10:00:25.957413317Z" level=info msg="CreateContainer within sandbox \"3511290f84cdb5a8a4b511c0aee9119334db481359d52c4e34c0778d86cce0b0\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 10 10:00:25.957854 containerd[1491]: time="2025-05-10T10:00:25.957818728Z" level=info msg="CreateContainer within sandbox \"8b36cada24d1025200a3089403edec560e87f6dcd7ddadaa5170dd58b055374e\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 10 10:00:25.959264 containerd[1491]: time="2025-05-10T10:00:25.959229554Z" level=info msg="Container eb3eafe221b921be0322c9a1482911e90ab2118e76f123801aeaeb5fd9bdcc27: CDI devices from CRI Config.CDIDevices: []" May 10 10:00:25.972028 containerd[1491]: time="2025-05-10T10:00:25.971984055Z" level=info msg="CreateContainer within sandbox \"4df4d3d481fd8465d0ddc16d0a323166b93e53b321916ad98171e184e66c65a7\" 
for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"eb3eafe221b921be0322c9a1482911e90ab2118e76f123801aeaeb5fd9bdcc27\"" May 10 10:00:25.973111 containerd[1491]: time="2025-05-10T10:00:25.973075462Z" level=info msg="StartContainer for \"eb3eafe221b921be0322c9a1482911e90ab2118e76f123801aeaeb5fd9bdcc27\"" May 10 10:00:25.975048 containerd[1491]: time="2025-05-10T10:00:25.975013086Z" level=info msg="connecting to shim eb3eafe221b921be0322c9a1482911e90ab2118e76f123801aeaeb5fd9bdcc27" address="unix:///run/containerd/s/1a126447ad2ab2dc6d1689eccd922b927f82e0d1caa273749ee49653e4b09065" protocol=ttrpc version=3 May 10 10:00:25.979439 containerd[1491]: time="2025-05-10T10:00:25.979397599Z" level=info msg="Container 6cbe12b51f3cfb3905a87a9123b37397f6a85f2346dac9692f1e44d312ce0f08: CDI devices from CRI Config.CDIDevices: []" May 10 10:00:25.983712 containerd[1491]: time="2025-05-10T10:00:25.983586285Z" level=info msg="Container 4b36bf06c42bd026a27de688fa656da513b65c7559193656d5704222b415f7ed: CDI devices from CRI Config.CDIDevices: []" May 10 10:00:25.998580 containerd[1491]: time="2025-05-10T10:00:25.998529992Z" level=info msg="CreateContainer within sandbox \"3511290f84cdb5a8a4b511c0aee9119334db481359d52c4e34c0778d86cce0b0\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"6cbe12b51f3cfb3905a87a9123b37397f6a85f2346dac9692f1e44d312ce0f08\"" May 10 10:00:25.999659 containerd[1491]: time="2025-05-10T10:00:25.999626709Z" level=info msg="StartContainer for \"6cbe12b51f3cfb3905a87a9123b37397f6a85f2346dac9692f1e44d312ce0f08\"" May 10 10:00:26.000523 systemd[1]: Started cri-containerd-eb3eafe221b921be0322c9a1482911e90ab2118e76f123801aeaeb5fd9bdcc27.scope - libcontainer container eb3eafe221b921be0322c9a1482911e90ab2118e76f123801aeaeb5fd9bdcc27. May 10 10:00:26.003373 containerd[1491]: time="2025-05-10T10:00:26.003287194Z" level=info msg="connecting to shim 6cbe12b51f3cfb3905a87a9123b37397f6a85f2346dac9692f1e44d312ce0f08" address="unix:///run/containerd/s/e9ee31955c694d5e700db747ec47d7381df32a162f3803e3a530374561b24836" protocol=ttrpc version=3 May 10 10:00:26.008120 containerd[1491]: time="2025-05-10T10:00:26.007857716Z" level=info msg="CreateContainer within sandbox \"8b36cada24d1025200a3089403edec560e87f6dcd7ddadaa5170dd58b055374e\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"4b36bf06c42bd026a27de688fa656da513b65c7559193656d5704222b415f7ed\"" May 10 10:00:26.009144 containerd[1491]: time="2025-05-10T10:00:26.009107630Z" level=info msg="StartContainer for \"4b36bf06c42bd026a27de688fa656da513b65c7559193656d5704222b415f7ed\"" May 10 10:00:26.010260 containerd[1491]: time="2025-05-10T10:00:26.010229985Z" level=info msg="connecting to shim 4b36bf06c42bd026a27de688fa656da513b65c7559193656d5704222b415f7ed" address="unix:///run/containerd/s/0b8a98e068ebee54fedcb87d9bf57bbb1060873b6fc3377236c95ff6ea25e8b4" protocol=ttrpc version=3 May 10 10:00:26.018945 kubelet[2477]: E0510 10:00:26.018902 2477 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.22:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4330-0-0-n-cc41d9e3f6.novalocal?timeout=10s\": dial tcp 172.24.4.22:6443: connect: connection refused" interval="1.6s" May 10 10:00:26.034072 systemd[1]: Started cri-containerd-6cbe12b51f3cfb3905a87a9123b37397f6a85f2346dac9692f1e44d312ce0f08.scope - libcontainer container 6cbe12b51f3cfb3905a87a9123b37397f6a85f2346dac9692f1e44d312ce0f08. 
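[Editor's note] The RunPodSandbox / CreateContainer / StartContainer messages above are containerd's CRI service handling the kubelet's gRPC calls for the three static control-plane pods. The sketch below connects to the same CRI endpoint as an outside observer and lists the sandboxes that were just created, using the generated cri-api client; the socket path follows the usual containerd default and grpc.Dial is used for brevity, so treat the wiring as an assumption rather than kubelet code.

    package main

    import (
    	"context"
    	"fmt"
    	"log"
    	"time"

    	"google.golang.org/grpc"
    	"google.golang.org/grpc/credentials/insecure"
    	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
    	// containerd's CRI plugin is served on the main containerd socket.
    	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
    		grpc.WithTransportCredentials(insecure.NewCredentials()))
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer conn.Close()

    	client := runtimeapi.NewRuntimeServiceClient(conn)
    	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
    	defer cancel()

    	// Should include the kube-apiserver / controller-manager / scheduler sandboxes created above.
    	resp, err := client.ListPodSandbox(ctx, &runtimeapi.ListPodSandboxRequest{})
    	if err != nil {
    		log.Fatal(err)
    	}
    	for _, sb := range resp.Items {
    		fmt.Printf("%s  %s/%s  %s\n", sb.Id, sb.Metadata.Namespace, sb.Metadata.Name, sb.State)
    	}
    }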
May 10 10:00:26.042947 systemd[1]: Started cri-containerd-4b36bf06c42bd026a27de688fa656da513b65c7559193656d5704222b415f7ed.scope - libcontainer container 4b36bf06c42bd026a27de688fa656da513b65c7559193656d5704222b415f7ed. May 10 10:00:26.080019 containerd[1491]: time="2025-05-10T10:00:26.079790746Z" level=info msg="StartContainer for \"eb3eafe221b921be0322c9a1482911e90ab2118e76f123801aeaeb5fd9bdcc27\" returns successfully" May 10 10:00:26.131753 containerd[1491]: time="2025-05-10T10:00:26.130875215Z" level=info msg="StartContainer for \"6cbe12b51f3cfb3905a87a9123b37397f6a85f2346dac9692f1e44d312ce0f08\" returns successfully" May 10 10:00:26.131952 kubelet[2477]: I0510 10:00:26.131426 2477 kubelet_node_status.go:73] "Attempting to register node" node="ci-4330-0-0-n-cc41d9e3f6.novalocal" May 10 10:00:26.131952 kubelet[2477]: E0510 10:00:26.131799 2477 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.24.4.22:6443/api/v1/nodes\": dial tcp 172.24.4.22:6443: connect: connection refused" node="ci-4330-0-0-n-cc41d9e3f6.novalocal" May 10 10:00:26.152991 containerd[1491]: time="2025-05-10T10:00:26.152757455Z" level=info msg="StartContainer for \"4b36bf06c42bd026a27de688fa656da513b65c7559193656d5704222b415f7ed\" returns successfully" May 10 10:00:27.716688 kubelet[2477]: E0510 10:00:27.716617 2477 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4330-0-0-n-cc41d9e3f6.novalocal\" not found" node="ci-4330-0-0-n-cc41d9e3f6.novalocal" May 10 10:00:27.735182 kubelet[2477]: I0510 10:00:27.734959 2477 kubelet_node_status.go:73] "Attempting to register node" node="ci-4330-0-0-n-cc41d9e3f6.novalocal" May 10 10:00:27.782396 kubelet[2477]: I0510 10:00:27.781423 2477 kubelet_node_status.go:76] "Successfully registered node" node="ci-4330-0-0-n-cc41d9e3f6.novalocal" May 10 10:00:27.800304 kubelet[2477]: E0510 10:00:27.800258 2477 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4330-0-0-n-cc41d9e3f6.novalocal\" not found" May 10 10:00:27.900832 kubelet[2477]: E0510 10:00:27.900775 2477 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4330-0-0-n-cc41d9e3f6.novalocal\" not found" May 10 10:00:28.002243 kubelet[2477]: E0510 10:00:28.001644 2477 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4330-0-0-n-cc41d9e3f6.novalocal\" not found" May 10 10:00:28.101991 kubelet[2477]: E0510 10:00:28.101897 2477 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4330-0-0-n-cc41d9e3f6.novalocal\" not found" May 10 10:00:28.202985 kubelet[2477]: E0510 10:00:28.202903 2477 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4330-0-0-n-cc41d9e3f6.novalocal\" not found" May 10 10:00:28.303694 kubelet[2477]: E0510 10:00:28.303456 2477 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4330-0-0-n-cc41d9e3f6.novalocal\" not found" May 10 10:00:28.404573 kubelet[2477]: E0510 10:00:28.404493 2477 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4330-0-0-n-cc41d9e3f6.novalocal\" not found" May 10 10:00:28.505505 kubelet[2477]: E0510 10:00:28.505431 2477 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4330-0-0-n-cc41d9e3f6.novalocal\" not found" May 10 10:00:28.606767 kubelet[2477]: E0510 10:00:28.606576 2477 kubelet_node_status.go:462] "Error 
getting the current node from lister" err="node \"ci-4330-0-0-n-cc41d9e3f6.novalocal\" not found" May 10 10:00:28.882215 kubelet[2477]: W0510 10:00:28.881873 2477 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 10 10:00:29.599868 kubelet[2477]: I0510 10:00:29.599803 2477 apiserver.go:52] "Watching apiserver" May 10 10:00:29.616001 kubelet[2477]: I0510 10:00:29.615578 2477 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 10 10:00:30.346937 systemd[1]: Reload requested from client PID 2751 ('systemctl') (unit session-11.scope)... May 10 10:00:30.346980 systemd[1]: Reloading... May 10 10:00:30.471416 zram_generator::config[2796]: No configuration found. May 10 10:00:30.598247 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 10 10:00:30.757986 systemd[1]: Reloading finished in 410 ms. May 10 10:00:30.786354 kubelet[2477]: I0510 10:00:30.786281 2477 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 10 10:00:30.786732 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 10 10:00:30.798522 systemd[1]: kubelet.service: Deactivated successfully. May 10 10:00:30.798780 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 10 10:00:30.798842 systemd[1]: kubelet.service: Consumed 1.029s CPU time, 114.8M memory peak. May 10 10:00:30.801111 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 10 10:00:30.972741 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 10 10:00:30.983120 (kubelet)[2860]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 10 10:00:31.256088 kubelet[2860]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 10 10:00:31.256799 kubelet[2860]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 10 10:00:31.256799 kubelet[2860]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 10 10:00:31.256918 kubelet[2860]: I0510 10:00:31.256823 2860 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 10 10:00:31.266482 kubelet[2860]: I0510 10:00:31.266419 2860 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" May 10 10:00:31.266482 kubelet[2860]: I0510 10:00:31.266463 2860 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 10 10:00:31.266889 kubelet[2860]: I0510 10:00:31.266836 2860 server.go:927] "Client rotation is on, will bootstrap in background" May 10 10:00:31.270406 kubelet[2860]: I0510 10:00:31.270318 2860 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
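[Editor's note] "Client rotation is on" together with "Loading cert/key pair from /var/lib/kubelet/pki/kubelet-client-current.pem" means this second kubelet start already holds bootstrapped, rotated client credentials, unlike the earlier run that could not even post a CSR. A small sketch that inspects the validity window of that rotated certificate is below; the path comes from the log, reading it requires root, and this is a diagnostic aid rather than anything the kubelet does itself.

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"log"
    	"os"
    )

    func main() {
    	// kubelet-client-current.pem holds the client certificate and key concatenated.
    	data, err := os.ReadFile("/var/lib/kubelet/pki/kubelet-client-current.pem")
    	if err != nil {
    		log.Fatal(err)
    	}
    	for {
    		var block *pem.Block
    		block, data = pem.Decode(data)
    		if block == nil {
    			break
    		}
    		if block.Type != "CERTIFICATE" {
    			continue // skip the private key block
    		}
    		cert, err := x509.ParseCertificate(block.Bytes)
    		if err != nil {
    			log.Fatal(err)
    		}
    		fmt.Printf("subject=%s notBefore=%s notAfter=%s\n",
    			cert.Subject, cert.NotBefore, cert.NotAfter)
    	}
    }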
May 10 10:00:31.273212 kubelet[2860]: I0510 10:00:31.273156 2860 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 10 10:00:31.286548 kubelet[2860]: I0510 10:00:31.286504 2860 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 10 10:00:31.286996 kubelet[2860]: I0510 10:00:31.286914 2860 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 10 10:00:31.287399 kubelet[2860]: I0510 10:00:31.286976 2860 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4330-0-0-n-cc41d9e3f6.novalocal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} May 10 10:00:31.287653 kubelet[2860]: I0510 10:00:31.287578 2860 topology_manager.go:138] "Creating topology manager with none policy" May 10 10:00:31.287653 kubelet[2860]: I0510 10:00:31.287612 2860 container_manager_linux.go:301] "Creating device plugin manager" May 10 10:00:31.287779 kubelet[2860]: I0510 10:00:31.287691 2860 state_mem.go:36] "Initialized new in-memory state store" May 10 10:00:31.288219 kubelet[2860]: I0510 10:00:31.287870 2860 kubelet.go:400] "Attempting to sync node with API server" May 10 10:00:31.288219 kubelet[2860]: I0510 10:00:31.287905 2860 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" May 10 10:00:31.288219 kubelet[2860]: I0510 10:00:31.287944 2860 kubelet.go:312] "Adding apiserver pod source" May 10 10:00:31.288219 kubelet[2860]: I0510 10:00:31.287969 2860 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 10 10:00:31.308855 kubelet[2860]: I0510 10:00:31.306787 2860 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" May 10 10:00:31.308855 kubelet[2860]: I0510 10:00:31.307161 2860 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 10 10:00:31.308855 kubelet[2860]: I0510 10:00:31.308118 2860 server.go:1264] "Started kubelet" May 10 10:00:31.320954 
kubelet[2860]: I0510 10:00:31.320922 2860 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 10 10:00:31.326441 kubelet[2860]: I0510 10:00:31.326185 2860 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 10 10:00:31.329999 kubelet[2860]: I0510 10:00:31.329932 2860 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 10 10:00:31.330318 kubelet[2860]: I0510 10:00:31.330304 2860 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 10 10:00:31.332006 kubelet[2860]: I0510 10:00:31.331993 2860 volume_manager.go:291] "Starting Kubelet Volume Manager" May 10 10:00:31.341977 sudo[2875]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin May 10 10:00:31.342314 sudo[2875]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) May 10 10:00:31.347389 kubelet[2860]: I0510 10:00:31.346163 2860 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 10 10:00:31.349982 kubelet[2860]: I0510 10:00:31.348180 2860 server.go:455] "Adding debug handlers to kubelet server" May 10 10:00:31.353859 kubelet[2860]: I0510 10:00:31.348212 2860 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 10 10:00:31.354086 kubelet[2860]: I0510 10:00:31.348320 2860 reconciler.go:26] "Reconciler: start to sync state" May 10 10:00:31.357942 kubelet[2860]: E0510 10:00:31.357919 2860 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 10 10:00:31.359015 kubelet[2860]: I0510 10:00:31.359000 2860 factory.go:221] Registration of the containerd container factory successfully May 10 10:00:31.359567 kubelet[2860]: I0510 10:00:31.359402 2860 factory.go:221] Registration of the systemd container factory successfully May 10 10:00:31.373964 kubelet[2860]: I0510 10:00:31.373871 2860 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 10 10:00:31.377378 kubelet[2860]: I0510 10:00:31.376457 2860 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 10 10:00:31.377378 kubelet[2860]: I0510 10:00:31.376485 2860 status_manager.go:217] "Starting to sync pod status with apiserver" May 10 10:00:31.377378 kubelet[2860]: I0510 10:00:31.376504 2860 kubelet.go:2337] "Starting kubelet main sync loop" May 10 10:00:31.377378 kubelet[2860]: E0510 10:00:31.376544 2860 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 10 10:00:31.422074 kubelet[2860]: I0510 10:00:31.422039 2860 cpu_manager.go:214] "Starting CPU manager" policy="none" May 10 10:00:31.422074 kubelet[2860]: I0510 10:00:31.422063 2860 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 10 10:00:31.422074 kubelet[2860]: I0510 10:00:31.422080 2860 state_mem.go:36] "Initialized new in-memory state store" May 10 10:00:31.422295 kubelet[2860]: I0510 10:00:31.422215 2860 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 10 10:00:31.422295 kubelet[2860]: I0510 10:00:31.422227 2860 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 10 10:00:31.422295 kubelet[2860]: I0510 10:00:31.422244 2860 policy_none.go:49] "None policy: Start" May 10 10:00:31.423448 kubelet[2860]: I0510 10:00:31.422918 2860 memory_manager.go:170] "Starting memorymanager" policy="None" May 10 10:00:31.423448 kubelet[2860]: I0510 10:00:31.422933 2860 state_mem.go:35] "Initializing new in-memory state store" May 10 10:00:31.423448 kubelet[2860]: I0510 10:00:31.423079 2860 state_mem.go:75] "Updated machine memory state" May 10 10:00:31.429011 kubelet[2860]: I0510 10:00:31.428987 2860 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 10 10:00:31.429501 kubelet[2860]: I0510 10:00:31.429137 2860 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 10 10:00:31.429501 kubelet[2860]: I0510 10:00:31.429233 2860 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 10 10:00:31.441403 kubelet[2860]: I0510 10:00:31.438522 2860 kubelet_node_status.go:73] "Attempting to register node" node="ci-4330-0-0-n-cc41d9e3f6.novalocal" May 10 10:00:31.456420 kubelet[2860]: I0510 10:00:31.456119 2860 kubelet_node_status.go:112] "Node was previously registered" node="ci-4330-0-0-n-cc41d9e3f6.novalocal" May 10 10:00:31.457172 kubelet[2860]: I0510 10:00:31.457157 2860 kubelet_node_status.go:76] "Successfully registered node" node="ci-4330-0-0-n-cc41d9e3f6.novalocal" May 10 10:00:31.476777 kubelet[2860]: I0510 10:00:31.476641 2860 topology_manager.go:215] "Topology Admit Handler" podUID="196593245d593d95d14c0d8e64ab0ca6" podNamespace="kube-system" podName="kube-apiserver-ci-4330-0-0-n-cc41d9e3f6.novalocal" May 10 10:00:31.477186 kubelet[2860]: I0510 10:00:31.477158 2860 topology_manager.go:215] "Topology Admit Handler" podUID="55c925e5edf8610a669631d429e701e8" podNamespace="kube-system" podName="kube-controller-manager-ci-4330-0-0-n-cc41d9e3f6.novalocal" May 10 10:00:31.478546 kubelet[2860]: I0510 10:00:31.478513 2860 topology_manager.go:215] "Topology Admit Handler" podUID="bc3bb21412fa24e00345a51568c8d59f" podNamespace="kube-system" podName="kube-scheduler-ci-4330-0-0-n-cc41d9e3f6.novalocal" May 10 10:00:31.488552 kubelet[2860]: W0510 10:00:31.488507 2860 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 10 10:00:31.490218 kubelet[2860]: 
W0510 10:00:31.490181 2860 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 10 10:00:31.490302 kubelet[2860]: E0510 10:00:31.490226 2860 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4330-0-0-n-cc41d9e3f6.novalocal\" already exists" pod="kube-system/kube-apiserver-ci-4330-0-0-n-cc41d9e3f6.novalocal" May 10 10:00:31.491178 kubelet[2860]: W0510 10:00:31.491022 2860 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 10 10:00:31.554908 kubelet[2860]: I0510 10:00:31.554809 2860 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/bc3bb21412fa24e00345a51568c8d59f-kubeconfig\") pod \"kube-scheduler-ci-4330-0-0-n-cc41d9e3f6.novalocal\" (UID: \"bc3bb21412fa24e00345a51568c8d59f\") " pod="kube-system/kube-scheduler-ci-4330-0-0-n-cc41d9e3f6.novalocal" May 10 10:00:31.555681 kubelet[2860]: I0510 10:00:31.555638 2860 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/196593245d593d95d14c0d8e64ab0ca6-k8s-certs\") pod \"kube-apiserver-ci-4330-0-0-n-cc41d9e3f6.novalocal\" (UID: \"196593245d593d95d14c0d8e64ab0ca6\") " pod="kube-system/kube-apiserver-ci-4330-0-0-n-cc41d9e3f6.novalocal" May 10 10:00:31.555753 kubelet[2860]: I0510 10:00:31.555687 2860 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/196593245d593d95d14c0d8e64ab0ca6-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4330-0-0-n-cc41d9e3f6.novalocal\" (UID: \"196593245d593d95d14c0d8e64ab0ca6\") " pod="kube-system/kube-apiserver-ci-4330-0-0-n-cc41d9e3f6.novalocal" May 10 10:00:31.555753 kubelet[2860]: I0510 10:00:31.555714 2860 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/55c925e5edf8610a669631d429e701e8-ca-certs\") pod \"kube-controller-manager-ci-4330-0-0-n-cc41d9e3f6.novalocal\" (UID: \"55c925e5edf8610a669631d429e701e8\") " pod="kube-system/kube-controller-manager-ci-4330-0-0-n-cc41d9e3f6.novalocal" May 10 10:00:31.555753 kubelet[2860]: I0510 10:00:31.555735 2860 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/55c925e5edf8610a669631d429e701e8-flexvolume-dir\") pod \"kube-controller-manager-ci-4330-0-0-n-cc41d9e3f6.novalocal\" (UID: \"55c925e5edf8610a669631d429e701e8\") " pod="kube-system/kube-controller-manager-ci-4330-0-0-n-cc41d9e3f6.novalocal" May 10 10:00:31.555845 kubelet[2860]: I0510 10:00:31.555755 2860 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/55c925e5edf8610a669631d429e701e8-k8s-certs\") pod \"kube-controller-manager-ci-4330-0-0-n-cc41d9e3f6.novalocal\" (UID: \"55c925e5edf8610a669631d429e701e8\") " pod="kube-system/kube-controller-manager-ci-4330-0-0-n-cc41d9e3f6.novalocal" May 10 10:00:31.555845 kubelet[2860]: I0510 10:00:31.555778 2860 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" 
(UniqueName: \"kubernetes.io/host-path/55c925e5edf8610a669631d429e701e8-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4330-0-0-n-cc41d9e3f6.novalocal\" (UID: \"55c925e5edf8610a669631d429e701e8\") " pod="kube-system/kube-controller-manager-ci-4330-0-0-n-cc41d9e3f6.novalocal" May 10 10:00:31.555845 kubelet[2860]: I0510 10:00:31.555803 2860 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/196593245d593d95d14c0d8e64ab0ca6-ca-certs\") pod \"kube-apiserver-ci-4330-0-0-n-cc41d9e3f6.novalocal\" (UID: \"196593245d593d95d14c0d8e64ab0ca6\") " pod="kube-system/kube-apiserver-ci-4330-0-0-n-cc41d9e3f6.novalocal" May 10 10:00:31.555845 kubelet[2860]: I0510 10:00:31.555824 2860 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/55c925e5edf8610a669631d429e701e8-kubeconfig\") pod \"kube-controller-manager-ci-4330-0-0-n-cc41d9e3f6.novalocal\" (UID: \"55c925e5edf8610a669631d429e701e8\") " pod="kube-system/kube-controller-manager-ci-4330-0-0-n-cc41d9e3f6.novalocal" May 10 10:00:31.958392 sudo[2875]: pam_unix(sudo:session): session closed for user root May 10 10:00:32.290161 kubelet[2860]: I0510 10:00:32.290040 2860 apiserver.go:52] "Watching apiserver" May 10 10:00:32.354670 kubelet[2860]: I0510 10:00:32.354584 2860 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 10 10:00:32.413457 kubelet[2860]: W0510 10:00:32.412314 2860 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 10 10:00:32.413457 kubelet[2860]: E0510 10:00:32.412471 2860 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4330-0-0-n-cc41d9e3f6.novalocal\" already exists" pod="kube-system/kube-apiserver-ci-4330-0-0-n-cc41d9e3f6.novalocal" May 10 10:00:32.471335 kubelet[2860]: I0510 10:00:32.470381 2860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4330-0-0-n-cc41d9e3f6.novalocal" podStartSLOduration=4.470328346 podStartE2EDuration="4.470328346s" podCreationTimestamp="2025-05-10 10:00:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-10 10:00:32.457456741 +0000 UTC m=+1.466337500" watchObservedRunningTime="2025-05-10 10:00:32.470328346 +0000 UTC m=+1.479209105" May 10 10:00:32.487866 kubelet[2860]: I0510 10:00:32.487475 2860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4330-0-0-n-cc41d9e3f6.novalocal" podStartSLOduration=1.487455666 podStartE2EDuration="1.487455666s" podCreationTimestamp="2025-05-10 10:00:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-10 10:00:32.47341909 +0000 UTC m=+1.482299849" watchObservedRunningTime="2025-05-10 10:00:32.487455666 +0000 UTC m=+1.496336425" May 10 10:00:35.249602 sudo[1802]: pam_unix(sudo:session): session closed for user root May 10 10:00:35.432597 sshd[1801]: Connection closed by 172.24.4.1 port 36510 May 10 10:00:35.433645 sshd-session[1798]: pam_unix(sshd:session): session closed for user core May 10 10:00:35.445673 systemd-logind[1468]: Session 11 logged out. Waiting for processes to exit. 
May 10 10:00:35.446509 systemd[1]: sshd@8-172.24.4.22:22-172.24.4.1:36510.service: Deactivated successfully. May 10 10:00:35.457112 systemd[1]: session-11.scope: Deactivated successfully. May 10 10:00:35.458415 systemd[1]: session-11.scope: Consumed 9.196s CPU time, 292.5M memory peak. May 10 10:00:35.464454 systemd-logind[1468]: Removed session 11. May 10 10:00:38.080174 kubelet[2860]: I0510 10:00:38.080059 2860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4330-0-0-n-cc41d9e3f6.novalocal" podStartSLOduration=7.080028879 podStartE2EDuration="7.080028879s" podCreationTimestamp="2025-05-10 10:00:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-10 10:00:32.488220274 +0000 UTC m=+1.497101053" watchObservedRunningTime="2025-05-10 10:00:38.080028879 +0000 UTC m=+7.088909688" May 10 10:00:46.473032 kubelet[2860]: I0510 10:00:46.472995 2860 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 10 10:00:46.474043 kubelet[2860]: I0510 10:00:46.473743 2860 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 10 10:00:46.474085 containerd[1491]: time="2025-05-10T10:00:46.473447004Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 10 10:00:47.206922 kubelet[2860]: I0510 10:00:47.206750 2860 topology_manager.go:215] "Topology Admit Handler" podUID="64314a68-7908-45a1-bb4e-e3a13c271b8f" podNamespace="kube-system" podName="kube-proxy-7qmn5" May 10 10:00:47.220548 kubelet[2860]: W0510 10:00:47.220497 2860 reflector.go:547] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4330-0-0-n-cc41d9e3f6.novalocal" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4330-0-0-n-cc41d9e3f6.novalocal' and this object May 10 10:00:47.220683 kubelet[2860]: E0510 10:00:47.220567 2860 reflector.go:150] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4330-0-0-n-cc41d9e3f6.novalocal" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4330-0-0-n-cc41d9e3f6.novalocal' and this object May 10 10:00:47.220683 kubelet[2860]: W0510 10:00:47.220667 2860 reflector.go:547] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ci-4330-0-0-n-cc41d9e3f6.novalocal" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4330-0-0-n-cc41d9e3f6.novalocal' and this object May 10 10:00:47.220768 kubelet[2860]: E0510 10:00:47.220696 2860 reflector.go:150] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ci-4330-0-0-n-cc41d9e3f6.novalocal" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4330-0-0-n-cc41d9e3f6.novalocal' and this object May 10 10:00:47.227095 systemd[1]: Created slice kubepods-besteffort-pod64314a68_7908_45a1_bb4e_e3a13c271b8f.slice - libcontainer container 
kubepods-besteffort-pod64314a68_7908_45a1_bb4e_e3a13c271b8f.slice. May 10 10:00:47.236907 kubelet[2860]: I0510 10:00:47.236859 2860 topology_manager.go:215] "Topology Admit Handler" podUID="7519c910-415f-4090-b08a-5ae230482243" podNamespace="kube-system" podName="cilium-2r9vj" May 10 10:00:47.245845 systemd[1]: Created slice kubepods-burstable-pod7519c910_415f_4090_b08a_5ae230482243.slice - libcontainer container kubepods-burstable-pod7519c910_415f_4090_b08a_5ae230482243.slice. May 10 10:00:47.270487 kubelet[2860]: I0510 10:00:47.270448 2860 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7519c910-415f-4090-b08a-5ae230482243-host-proc-sys-net\") pod \"cilium-2r9vj\" (UID: \"7519c910-415f-4090-b08a-5ae230482243\") " pod="kube-system/cilium-2r9vj" May 10 10:00:47.270487 kubelet[2860]: I0510 10:00:47.270514 2860 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7519c910-415f-4090-b08a-5ae230482243-host-proc-sys-kernel\") pod \"cilium-2r9vj\" (UID: \"7519c910-415f-4090-b08a-5ae230482243\") " pod="kube-system/cilium-2r9vj" May 10 10:00:47.270487 kubelet[2860]: I0510 10:00:47.270546 2860 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/64314a68-7908-45a1-bb4e-e3a13c271b8f-xtables-lock\") pod \"kube-proxy-7qmn5\" (UID: \"64314a68-7908-45a1-bb4e-e3a13c271b8f\") " pod="kube-system/kube-proxy-7qmn5" May 10 10:00:47.270487 kubelet[2860]: I0510 10:00:47.270567 2860 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7519c910-415f-4090-b08a-5ae230482243-cilium-cgroup\") pod \"cilium-2r9vj\" (UID: \"7519c910-415f-4090-b08a-5ae230482243\") " pod="kube-system/cilium-2r9vj" May 10 10:00:47.270822 kubelet[2860]: I0510 10:00:47.270587 2860 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7519c910-415f-4090-b08a-5ae230482243-etc-cni-netd\") pod \"cilium-2r9vj\" (UID: \"7519c910-415f-4090-b08a-5ae230482243\") " pod="kube-system/cilium-2r9vj" May 10 10:00:47.270822 kubelet[2860]: I0510 10:00:47.270610 2860 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7519c910-415f-4090-b08a-5ae230482243-xtables-lock\") pod \"cilium-2r9vj\" (UID: \"7519c910-415f-4090-b08a-5ae230482243\") " pod="kube-system/cilium-2r9vj" May 10 10:00:47.270822 kubelet[2860]: I0510 10:00:47.270630 2860 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7519c910-415f-4090-b08a-5ae230482243-clustermesh-secrets\") pod \"cilium-2r9vj\" (UID: \"7519c910-415f-4090-b08a-5ae230482243\") " pod="kube-system/cilium-2r9vj" May 10 10:00:47.270822 kubelet[2860]: I0510 10:00:47.270649 2860 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7519c910-415f-4090-b08a-5ae230482243-cilium-run\") pod \"cilium-2r9vj\" (UID: \"7519c910-415f-4090-b08a-5ae230482243\") " pod="kube-system/cilium-2r9vj" May 10 10:00:47.270822 kubelet[2860]: I0510 10:00:47.270669 
2860 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7519c910-415f-4090-b08a-5ae230482243-bpf-maps\") pod \"cilium-2r9vj\" (UID: \"7519c910-415f-4090-b08a-5ae230482243\") " pod="kube-system/cilium-2r9vj" May 10 10:00:47.270822 kubelet[2860]: I0510 10:00:47.270687 2860 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7519c910-415f-4090-b08a-5ae230482243-cni-path\") pod \"cilium-2r9vj\" (UID: \"7519c910-415f-4090-b08a-5ae230482243\") " pod="kube-system/cilium-2r9vj" May 10 10:00:47.271004 kubelet[2860]: I0510 10:00:47.270712 2860 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7519c910-415f-4090-b08a-5ae230482243-hubble-tls\") pod \"cilium-2r9vj\" (UID: \"7519c910-415f-4090-b08a-5ae230482243\") " pod="kube-system/cilium-2r9vj" May 10 10:00:47.271004 kubelet[2860]: I0510 10:00:47.270731 2860 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9nlwc\" (UniqueName: \"kubernetes.io/projected/64314a68-7908-45a1-bb4e-e3a13c271b8f-kube-api-access-9nlwc\") pod \"kube-proxy-7qmn5\" (UID: \"64314a68-7908-45a1-bb4e-e3a13c271b8f\") " pod="kube-system/kube-proxy-7qmn5" May 10 10:00:47.271004 kubelet[2860]: I0510 10:00:47.270751 2860 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pgg2g\" (UniqueName: \"kubernetes.io/projected/7519c910-415f-4090-b08a-5ae230482243-kube-api-access-pgg2g\") pod \"cilium-2r9vj\" (UID: \"7519c910-415f-4090-b08a-5ae230482243\") " pod="kube-system/cilium-2r9vj" May 10 10:00:47.271004 kubelet[2860]: I0510 10:00:47.270770 2860 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7519c910-415f-4090-b08a-5ae230482243-cilium-config-path\") pod \"cilium-2r9vj\" (UID: \"7519c910-415f-4090-b08a-5ae230482243\") " pod="kube-system/cilium-2r9vj" May 10 10:00:47.271004 kubelet[2860]: I0510 10:00:47.270787 2860 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/64314a68-7908-45a1-bb4e-e3a13c271b8f-kube-proxy\") pod \"kube-proxy-7qmn5\" (UID: \"64314a68-7908-45a1-bb4e-e3a13c271b8f\") " pod="kube-system/kube-proxy-7qmn5" May 10 10:00:47.271167 kubelet[2860]: I0510 10:00:47.270805 2860 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/64314a68-7908-45a1-bb4e-e3a13c271b8f-lib-modules\") pod \"kube-proxy-7qmn5\" (UID: \"64314a68-7908-45a1-bb4e-e3a13c271b8f\") " pod="kube-system/kube-proxy-7qmn5" May 10 10:00:47.271167 kubelet[2860]: I0510 10:00:47.270824 2860 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7519c910-415f-4090-b08a-5ae230482243-hostproc\") pod \"cilium-2r9vj\" (UID: \"7519c910-415f-4090-b08a-5ae230482243\") " pod="kube-system/cilium-2r9vj" May 10 10:00:47.271167 kubelet[2860]: I0510 10:00:47.270843 2860 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/7519c910-415f-4090-b08a-5ae230482243-lib-modules\") pod \"cilium-2r9vj\" (UID: \"7519c910-415f-4090-b08a-5ae230482243\") " pod="kube-system/cilium-2r9vj" May 10 10:00:47.540380 kubelet[2860]: I0510 10:00:47.539792 2860 topology_manager.go:215] "Topology Admit Handler" podUID="418f35a5-cb54-4456-8d05-6f4a23bbb187" podNamespace="kube-system" podName="cilium-operator-599987898-n7btx" May 10 10:00:47.550741 systemd[1]: Created slice kubepods-besteffort-pod418f35a5_cb54_4456_8d05_6f4a23bbb187.slice - libcontainer container kubepods-besteffort-pod418f35a5_cb54_4456_8d05_6f4a23bbb187.slice. May 10 10:00:47.573169 kubelet[2860]: I0510 10:00:47.573119 2860 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/418f35a5-cb54-4456-8d05-6f4a23bbb187-cilium-config-path\") pod \"cilium-operator-599987898-n7btx\" (UID: \"418f35a5-cb54-4456-8d05-6f4a23bbb187\") " pod="kube-system/cilium-operator-599987898-n7btx" May 10 10:00:47.573295 kubelet[2860]: I0510 10:00:47.573273 2860 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lwh8d\" (UniqueName: \"kubernetes.io/projected/418f35a5-cb54-4456-8d05-6f4a23bbb187-kube-api-access-lwh8d\") pod \"cilium-operator-599987898-n7btx\" (UID: \"418f35a5-cb54-4456-8d05-6f4a23bbb187\") " pod="kube-system/cilium-operator-599987898-n7btx" May 10 10:00:48.439172 containerd[1491]: time="2025-05-10T10:00:48.438881868Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7qmn5,Uid:64314a68-7908-45a1-bb4e-e3a13c271b8f,Namespace:kube-system,Attempt:0,}" May 10 10:00:48.452649 containerd[1491]: time="2025-05-10T10:00:48.452282359Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2r9vj,Uid:7519c910-415f-4090-b08a-5ae230482243,Namespace:kube-system,Attempt:0,}" May 10 10:00:48.456192 containerd[1491]: time="2025-05-10T10:00:48.455799039Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-n7btx,Uid:418f35a5-cb54-4456-8d05-6f4a23bbb187,Namespace:kube-system,Attempt:0,}" May 10 10:00:48.898860 containerd[1491]: time="2025-05-10T10:00:48.898566482Z" level=info msg="connecting to shim b79958dcc1cf9bbf43f9cea39492fb16ac65e9cbdc2c9cf491da1d807ed6581e" address="unix:///run/containerd/s/1ed73713c5c87af491aa9f1a4dc125816da0c86992b7f3d0fbd12f5c592a07c7" namespace=k8s.io protocol=ttrpc version=3 May 10 10:00:48.908037 containerd[1491]: time="2025-05-10T10:00:48.907885225Z" level=info msg="connecting to shim b675babd66b94d325da2ecb4e6de5918c4b0fdf0d4c875f91b38353642e141c2" address="unix:///run/containerd/s/cc458fdd2289b35c81aeb96640300bf1ac398117566c639e412bbfd8f9dbac6f" namespace=k8s.io protocol=ttrpc version=3 May 10 10:00:48.909838 containerd[1491]: time="2025-05-10T10:00:48.909762542Z" level=info msg="connecting to shim 3d3267607487f3d83c6ea3c852ec2d378d9942387a3b7f27430957fb3c692ade" address="unix:///run/containerd/s/dc29876a219b0d6fdfaf4b487e1472a21f4886500b8d70d3441de539987fe2d4" namespace=k8s.io protocol=ttrpc version=3 May 10 10:00:48.955262 systemd[1]: Started cri-containerd-3d3267607487f3d83c6ea3c852ec2d378d9942387a3b7f27430957fb3c692ade.scope - libcontainer container 3d3267607487f3d83c6ea3c852ec2d378d9942387a3b7f27430957fb3c692ade. 
May 10 10:00:48.969547 systemd[1]: Started cri-containerd-b675babd66b94d325da2ecb4e6de5918c4b0fdf0d4c875f91b38353642e141c2.scope - libcontainer container b675babd66b94d325da2ecb4e6de5918c4b0fdf0d4c875f91b38353642e141c2. May 10 10:00:48.970840 systemd[1]: Started cri-containerd-b79958dcc1cf9bbf43f9cea39492fb16ac65e9cbdc2c9cf491da1d807ed6581e.scope - libcontainer container b79958dcc1cf9bbf43f9cea39492fb16ac65e9cbdc2c9cf491da1d807ed6581e. May 10 10:00:49.017977 containerd[1491]: time="2025-05-10T10:00:49.017848155Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2r9vj,Uid:7519c910-415f-4090-b08a-5ae230482243,Namespace:kube-system,Attempt:0,} returns sandbox id \"b675babd66b94d325da2ecb4e6de5918c4b0fdf0d4c875f91b38353642e141c2\"" May 10 10:00:49.022434 containerd[1491]: time="2025-05-10T10:00:49.021628330Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" May 10 10:00:49.023417 containerd[1491]: time="2025-05-10T10:00:49.021718000Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7qmn5,Uid:64314a68-7908-45a1-bb4e-e3a13c271b8f,Namespace:kube-system,Attempt:0,} returns sandbox id \"3d3267607487f3d83c6ea3c852ec2d378d9942387a3b7f27430957fb3c692ade\"" May 10 10:00:49.031036 containerd[1491]: time="2025-05-10T10:00:49.030995028Z" level=info msg="CreateContainer within sandbox \"3d3267607487f3d83c6ea3c852ec2d378d9942387a3b7f27430957fb3c692ade\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 10 10:00:49.046705 containerd[1491]: time="2025-05-10T10:00:49.046666246Z" level=info msg="Container 32c44cca0b6ae0c57561a7f6780537ccda20bd7c736350da3f238b13d8b55c76: CDI devices from CRI Config.CDIDevices: []" May 10 10:00:49.062403 containerd[1491]: time="2025-05-10T10:00:49.061972248Z" level=info msg="CreateContainer within sandbox \"3d3267607487f3d83c6ea3c852ec2d378d9942387a3b7f27430957fb3c692ade\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"32c44cca0b6ae0c57561a7f6780537ccda20bd7c736350da3f238b13d8b55c76\"" May 10 10:00:49.064947 containerd[1491]: time="2025-05-10T10:00:49.064924939Z" level=info msg="StartContainer for \"32c44cca0b6ae0c57561a7f6780537ccda20bd7c736350da3f238b13d8b55c76\"" May 10 10:00:49.071301 containerd[1491]: time="2025-05-10T10:00:49.071262771Z" level=info msg="connecting to shim 32c44cca0b6ae0c57561a7f6780537ccda20bd7c736350da3f238b13d8b55c76" address="unix:///run/containerd/s/dc29876a219b0d6fdfaf4b487e1472a21f4886500b8d70d3441de539987fe2d4" protocol=ttrpc version=3 May 10 10:00:49.073389 containerd[1491]: time="2025-05-10T10:00:49.071299832Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-n7btx,Uid:418f35a5-cb54-4456-8d05-6f4a23bbb187,Namespace:kube-system,Attempt:0,} returns sandbox id \"b79958dcc1cf9bbf43f9cea39492fb16ac65e9cbdc2c9cf491da1d807ed6581e\"" May 10 10:00:49.098530 systemd[1]: Started cri-containerd-32c44cca0b6ae0c57561a7f6780537ccda20bd7c736350da3f238b13d8b55c76.scope - libcontainer container 32c44cca0b6ae0c57561a7f6780537ccda20bd7c736350da3f238b13d8b55c76. 
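[Editor's note] The "connecting to shim … address=\"unix:///run/containerd/s/…\" protocol=ttrpc" records above describe per-sandbox shim endpoints: ordinary unix-domain sockets under /run/containerd/s/ with ttrpc layered on top (note that the kube-proxy container reuses the same socket path as its pod sandbox). A hedged Go sketch of just the transport step, using a path copied from the log; it only dials the socket, speaks no ttrpc, and would need to run as root on the node:

```go
package main

import (
	"fmt"
	"net"
	"net/url"
	"time"
)

func main() {
	// Shim address copied from the "connecting to shim" record above.
	u, err := url.Parse("unix:///run/containerd/s/dc29876a219b0d6fdfaf4b487e1472a21f4886500b8d70d3441de539987fe2d4")
	if err != nil {
		panic(err)
	}

	// u.Path is a plain filesystem socket; containerd layers ttrpc on top of it.
	conn, err := net.DialTimeout("unix", u.Path, time.Second)
	if err != nil {
		fmt.Println("dial failed (expected off-node or without root):", err)
		return
	}
	defer conn.Close()
	fmt.Println("connected to", u.Path)
}
```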
May 10 10:00:49.144181 containerd[1491]: time="2025-05-10T10:00:49.143574406Z" level=info msg="StartContainer for \"32c44cca0b6ae0c57561a7f6780537ccda20bd7c736350da3f238b13d8b55c76\" returns successfully" May 10 10:00:51.407153 kubelet[2860]: I0510 10:00:51.405639 2860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-7qmn5" podStartSLOduration=4.405607757 podStartE2EDuration="4.405607757s" podCreationTimestamp="2025-05-10 10:00:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-10 10:00:49.480014639 +0000 UTC m=+18.488895398" watchObservedRunningTime="2025-05-10 10:00:51.405607757 +0000 UTC m=+20.414488526" May 10 10:00:54.759193 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount852166893.mount: Deactivated successfully. May 10 10:00:57.845109 containerd[1491]: time="2025-05-10T10:00:57.845018152Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 10:00:57.852008 containerd[1491]: time="2025-05-10T10:00:57.851905664Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" May 10 10:00:57.855555 containerd[1491]: time="2025-05-10T10:00:57.854637713Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 10:00:57.858925 containerd[1491]: time="2025-05-10T10:00:57.858856485Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 8.836068449s" May 10 10:00:57.859150 containerd[1491]: time="2025-05-10T10:00:57.859101900Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" May 10 10:00:57.861891 containerd[1491]: time="2025-05-10T10:00:57.861843337Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" May 10 10:00:57.865422 containerd[1491]: time="2025-05-10T10:00:57.864997475Z" level=info msg="CreateContainer within sandbox \"b675babd66b94d325da2ecb4e6de5918c4b0fdf0d4c875f91b38353642e141c2\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 10 10:00:57.882588 containerd[1491]: time="2025-05-10T10:00:57.882530780Z" level=info msg="Container b38f1e8da1de9e56e6519ae708b769fd2ed9961296f59f7146cb0e1922612f55: CDI devices from CRI Config.CDIDevices: []" May 10 10:00:57.907254 containerd[1491]: time="2025-05-10T10:00:57.907187483Z" level=info msg="CreateContainer within sandbox \"b675babd66b94d325da2ecb4e6de5918c4b0fdf0d4c875f91b38353642e141c2\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"b38f1e8da1de9e56e6519ae708b769fd2ed9961296f59f7146cb0e1922612f55\"" May 10 10:00:57.908895 containerd[1491]: 
time="2025-05-10T10:00:57.908558568Z" level=info msg="StartContainer for \"b38f1e8da1de9e56e6519ae708b769fd2ed9961296f59f7146cb0e1922612f55\"" May 10 10:00:57.910429 containerd[1491]: time="2025-05-10T10:00:57.910306475Z" level=info msg="connecting to shim b38f1e8da1de9e56e6519ae708b769fd2ed9961296f59f7146cb0e1922612f55" address="unix:///run/containerd/s/cc458fdd2289b35c81aeb96640300bf1ac398117566c639e412bbfd8f9dbac6f" protocol=ttrpc version=3 May 10 10:00:57.953523 systemd[1]: Started cri-containerd-b38f1e8da1de9e56e6519ae708b769fd2ed9961296f59f7146cb0e1922612f55.scope - libcontainer container b38f1e8da1de9e56e6519ae708b769fd2ed9961296f59f7146cb0e1922612f55. May 10 10:00:57.988022 containerd[1491]: time="2025-05-10T10:00:57.987957530Z" level=info msg="StartContainer for \"b38f1e8da1de9e56e6519ae708b769fd2ed9961296f59f7146cb0e1922612f55\" returns successfully" May 10 10:00:57.999222 systemd[1]: cri-containerd-b38f1e8da1de9e56e6519ae708b769fd2ed9961296f59f7146cb0e1922612f55.scope: Deactivated successfully. May 10 10:00:58.003006 containerd[1491]: time="2025-05-10T10:00:58.002883721Z" level=info msg="received exit event container_id:\"b38f1e8da1de9e56e6519ae708b769fd2ed9961296f59f7146cb0e1922612f55\" id:\"b38f1e8da1de9e56e6519ae708b769fd2ed9961296f59f7146cb0e1922612f55\" pid:3264 exited_at:{seconds:1746871258 nanos:2255232}" May 10 10:00:58.003006 containerd[1491]: time="2025-05-10T10:00:58.002966036Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b38f1e8da1de9e56e6519ae708b769fd2ed9961296f59f7146cb0e1922612f55\" id:\"b38f1e8da1de9e56e6519ae708b769fd2ed9961296f59f7146cb0e1922612f55\" pid:3264 exited_at:{seconds:1746871258 nanos:2255232}" May 10 10:00:58.021830 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b38f1e8da1de9e56e6519ae708b769fd2ed9961296f59f7146cb0e1922612f55-rootfs.mount: Deactivated successfully. May 10 10:01:00.506085 containerd[1491]: time="2025-05-10T10:01:00.504974078Z" level=info msg="CreateContainer within sandbox \"b675babd66b94d325da2ecb4e6de5918c4b0fdf0d4c875f91b38353642e141c2\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 10 10:01:00.532766 containerd[1491]: time="2025-05-10T10:01:00.531607493Z" level=info msg="Container 3c30e87913d7636b44bffa4ec1ffa8e40c99b7afcedaabbe507be7547e9c13e6: CDI devices from CRI Config.CDIDevices: []" May 10 10:01:00.543775 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount439842354.mount: Deactivated successfully. May 10 10:01:00.560715 containerd[1491]: time="2025-05-10T10:01:00.560639320Z" level=info msg="CreateContainer within sandbox \"b675babd66b94d325da2ecb4e6de5918c4b0fdf0d4c875f91b38353642e141c2\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"3c30e87913d7636b44bffa4ec1ffa8e40c99b7afcedaabbe507be7547e9c13e6\"" May 10 10:01:00.562831 containerd[1491]: time="2025-05-10T10:01:00.561703570Z" level=info msg="StartContainer for \"3c30e87913d7636b44bffa4ec1ffa8e40c99b7afcedaabbe507be7547e9c13e6\"" May 10 10:01:00.564756 containerd[1491]: time="2025-05-10T10:01:00.564689201Z" level=info msg="connecting to shim 3c30e87913d7636b44bffa4ec1ffa8e40c99b7afcedaabbe507be7547e9c13e6" address="unix:///run/containerd/s/cc458fdd2289b35c81aeb96640300bf1ac398117566c639e412bbfd8f9dbac6f" protocol=ttrpc version=3 May 10 10:01:00.604528 systemd[1]: Started cri-containerd-3c30e87913d7636b44bffa4ec1ffa8e40c99b7afcedaabbe507be7547e9c13e6.scope - libcontainer container 3c30e87913d7636b44bffa4ec1ffa8e40c99b7afcedaabbe507be7547e9c13e6. 
May 10 10:01:00.645533 containerd[1491]: time="2025-05-10T10:01:00.645443361Z" level=info msg="StartContainer for \"3c30e87913d7636b44bffa4ec1ffa8e40c99b7afcedaabbe507be7547e9c13e6\" returns successfully" May 10 10:01:00.655811 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 10 10:01:00.657011 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 10 10:01:00.657267 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... May 10 10:01:00.659452 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 10 10:01:00.662398 containerd[1491]: time="2025-05-10T10:01:00.662265340Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3c30e87913d7636b44bffa4ec1ffa8e40c99b7afcedaabbe507be7547e9c13e6\" id:\"3c30e87913d7636b44bffa4ec1ffa8e40c99b7afcedaabbe507be7547e9c13e6\" pid:3310 exited_at:{seconds:1746871260 nanos:661960775}" May 10 10:01:00.662398 containerd[1491]: time="2025-05-10T10:01:00.662327248Z" level=info msg="received exit event container_id:\"3c30e87913d7636b44bffa4ec1ffa8e40c99b7afcedaabbe507be7547e9c13e6\" id:\"3c30e87913d7636b44bffa4ec1ffa8e40c99b7afcedaabbe507be7547e9c13e6\" pid:3310 exited_at:{seconds:1746871260 nanos:661960775}" May 10 10:01:00.662535 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 10 10:01:00.663700 systemd[1]: cri-containerd-3c30e87913d7636b44bffa4ec1ffa8e40c99b7afcedaabbe507be7547e9c13e6.scope: Deactivated successfully. May 10 10:01:00.688591 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 10 10:01:01.503624 containerd[1491]: time="2025-05-10T10:01:01.503554778Z" level=info msg="CreateContainer within sandbox \"b675babd66b94d325da2ecb4e6de5918c4b0fdf0d4c875f91b38353642e141c2\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 10 10:01:01.528259 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3c30e87913d7636b44bffa4ec1ffa8e40c99b7afcedaabbe507be7547e9c13e6-rootfs.mount: Deactivated successfully. May 10 10:01:01.534317 containerd[1491]: time="2025-05-10T10:01:01.534282433Z" level=info msg="Container bdad80e925ea5e54c1b893bd566edfd9e06af99a02c94f53f5f4ab6d927f4098: CDI devices from CRI Config.CDIDevices: []" May 10 10:01:01.561511 containerd[1491]: time="2025-05-10T10:01:01.561474540Z" level=info msg="CreateContainer within sandbox \"b675babd66b94d325da2ecb4e6de5918c4b0fdf0d4c875f91b38353642e141c2\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"bdad80e925ea5e54c1b893bd566edfd9e06af99a02c94f53f5f4ab6d927f4098\"" May 10 10:01:01.562379 containerd[1491]: time="2025-05-10T10:01:01.562234655Z" level=info msg="StartContainer for \"bdad80e925ea5e54c1b893bd566edfd9e06af99a02c94f53f5f4ab6d927f4098\"" May 10 10:01:01.564307 containerd[1491]: time="2025-05-10T10:01:01.564283424Z" level=info msg="connecting to shim bdad80e925ea5e54c1b893bd566edfd9e06af99a02c94f53f5f4ab6d927f4098" address="unix:///run/containerd/s/cc458fdd2289b35c81aeb96640300bf1ac398117566c639e412bbfd8f9dbac6f" protocol=ttrpc version=3 May 10 10:01:01.598528 systemd[1]: Started cri-containerd-bdad80e925ea5e54c1b893bd566edfd9e06af99a02c94f53f5f4ab6d927f4098.scope - libcontainer container bdad80e925ea5e54c1b893bd566edfd9e06af99a02c94f53f5f4ab6d927f4098. May 10 10:01:01.658058 systemd[1]: cri-containerd-bdad80e925ea5e54c1b893bd566edfd9e06af99a02c94f53f5f4ab6d927f4098.scope: Deactivated successfully. 
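[Editor's note] The TaskExit/exit events above carry exited_at as a Unix epoch seconds/nanos pair; converting it reproduces the wall-clock instant shown by the surrounding journal timestamps. A short Go sketch using the pair from the apply-sysctl-overwrites exit event just below (illustrative only):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// exited_at pair from the TaskExit event for the apply-sysctl-overwrites
	// container: {seconds:1746871260 nanos:661960775}.
	t := time.Unix(1746871260, 661960775).UTC()
	fmt.Println(t) // 2025-05-10 10:01:00.661960775 +0000 UTC
}
```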
May 10 10:01:01.665392 containerd[1491]: time="2025-05-10T10:01:01.664473106Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bdad80e925ea5e54c1b893bd566edfd9e06af99a02c94f53f5f4ab6d927f4098\" id:\"bdad80e925ea5e54c1b893bd566edfd9e06af99a02c94f53f5f4ab6d927f4098\" pid:3369 exited_at:{seconds:1746871261 nanos:660222760}" May 10 10:01:01.665931 containerd[1491]: time="2025-05-10T10:01:01.665913205Z" level=info msg="received exit event container_id:\"bdad80e925ea5e54c1b893bd566edfd9e06af99a02c94f53f5f4ab6d927f4098\" id:\"bdad80e925ea5e54c1b893bd566edfd9e06af99a02c94f53f5f4ab6d927f4098\" pid:3369 exited_at:{seconds:1746871261 nanos:660222760}" May 10 10:01:01.669468 containerd[1491]: time="2025-05-10T10:01:01.669449734Z" level=info msg="StartContainer for \"bdad80e925ea5e54c1b893bd566edfd9e06af99a02c94f53f5f4ab6d927f4098\" returns successfully" May 10 10:01:01.704979 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bdad80e925ea5e54c1b893bd566edfd9e06af99a02c94f53f5f4ab6d927f4098-rootfs.mount: Deactivated successfully. May 10 10:01:02.248606 containerd[1491]: time="2025-05-10T10:01:02.248568076Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 10:01:02.250873 containerd[1491]: time="2025-05-10T10:01:02.250851656Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" May 10 10:01:02.252427 containerd[1491]: time="2025-05-10T10:01:02.252405379Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 10:01:02.253948 containerd[1491]: time="2025-05-10T10:01:02.253907384Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 4.390971811s" May 10 10:01:02.254005 containerd[1491]: time="2025-05-10T10:01:02.253949043Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" May 10 10:01:02.257218 containerd[1491]: time="2025-05-10T10:01:02.257152299Z" level=info msg="CreateContainer within sandbox \"b79958dcc1cf9bbf43f9cea39492fb16ac65e9cbdc2c9cf491da1d807ed6581e\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" May 10 10:01:02.269674 containerd[1491]: time="2025-05-10T10:01:02.269604322Z" level=info msg="Container 4f3c65f59c97d8d48b0d7aef2564059d24984b0a0bffb04c151f2a4968e85c53: CDI devices from CRI Config.CDIDevices: []" May 10 10:01:02.283174 containerd[1491]: time="2025-05-10T10:01:02.283080249Z" level=info msg="CreateContainer within sandbox \"b79958dcc1cf9bbf43f9cea39492fb16ac65e9cbdc2c9cf491da1d807ed6581e\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"4f3c65f59c97d8d48b0d7aef2564059d24984b0a0bffb04c151f2a4968e85c53\"" May 10 10:01:02.284585 containerd[1491]: 
time="2025-05-10T10:01:02.283436121Z" level=info msg="StartContainer for \"4f3c65f59c97d8d48b0d7aef2564059d24984b0a0bffb04c151f2a4968e85c53\"" May 10 10:01:02.285256 containerd[1491]: time="2025-05-10T10:01:02.285215921Z" level=info msg="connecting to shim 4f3c65f59c97d8d48b0d7aef2564059d24984b0a0bffb04c151f2a4968e85c53" address="unix:///run/containerd/s/1ed73713c5c87af491aa9f1a4dc125816da0c86992b7f3d0fbd12f5c592a07c7" protocol=ttrpc version=3 May 10 10:01:02.306513 systemd[1]: Started cri-containerd-4f3c65f59c97d8d48b0d7aef2564059d24984b0a0bffb04c151f2a4968e85c53.scope - libcontainer container 4f3c65f59c97d8d48b0d7aef2564059d24984b0a0bffb04c151f2a4968e85c53. May 10 10:01:02.343101 containerd[1491]: time="2025-05-10T10:01:02.343071705Z" level=info msg="StartContainer for \"4f3c65f59c97d8d48b0d7aef2564059d24984b0a0bffb04c151f2a4968e85c53\" returns successfully" May 10 10:01:02.511856 containerd[1491]: time="2025-05-10T10:01:02.511754730Z" level=info msg="CreateContainer within sandbox \"b675babd66b94d325da2ecb4e6de5918c4b0fdf0d4c875f91b38353642e141c2\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 10 10:01:02.532104 containerd[1491]: time="2025-05-10T10:01:02.531447782Z" level=info msg="Container 02f7c0c79c146df408d6e2ef7176b061f198fa7aab61037b008178e29bc49857: CDI devices from CRI Config.CDIDevices: []" May 10 10:01:02.547893 containerd[1491]: time="2025-05-10T10:01:02.547583498Z" level=info msg="CreateContainer within sandbox \"b675babd66b94d325da2ecb4e6de5918c4b0fdf0d4c875f91b38353642e141c2\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"02f7c0c79c146df408d6e2ef7176b061f198fa7aab61037b008178e29bc49857\"" May 10 10:01:02.553470 containerd[1491]: time="2025-05-10T10:01:02.552472877Z" level=info msg="StartContainer for \"02f7c0c79c146df408d6e2ef7176b061f198fa7aab61037b008178e29bc49857\"" May 10 10:01:02.553651 containerd[1491]: time="2025-05-10T10:01:02.553629680Z" level=info msg="connecting to shim 02f7c0c79c146df408d6e2ef7176b061f198fa7aab61037b008178e29bc49857" address="unix:///run/containerd/s/cc458fdd2289b35c81aeb96640300bf1ac398117566c639e412bbfd8f9dbac6f" protocol=ttrpc version=3 May 10 10:01:02.574482 kubelet[2860]: I0510 10:01:02.574425 2860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-n7btx" podStartSLOduration=2.398214402 podStartE2EDuration="15.574406537s" podCreationTimestamp="2025-05-10 10:00:47 +0000 UTC" firstStartedPulling="2025-05-10 10:00:49.078731698 +0000 UTC m=+18.087612467" lastFinishedPulling="2025-05-10 10:01:02.254923843 +0000 UTC m=+31.263804602" observedRunningTime="2025-05-10 10:01:02.569240496 +0000 UTC m=+31.578121255" watchObservedRunningTime="2025-05-10 10:01:02.574406537 +0000 UTC m=+31.583287296" May 10 10:01:02.601911 systemd[1]: Started cri-containerd-02f7c0c79c146df408d6e2ef7176b061f198fa7aab61037b008178e29bc49857.scope - libcontainer container 02f7c0c79c146df408d6e2ef7176b061f198fa7aab61037b008178e29bc49857. May 10 10:01:02.690794 systemd[1]: cri-containerd-02f7c0c79c146df408d6e2ef7176b061f198fa7aab61037b008178e29bc49857.scope: Deactivated successfully. 
May 10 10:01:02.691788 containerd[1491]: time="2025-05-10T10:01:02.691751991Z" level=info msg="TaskExit event in podsandbox handler container_id:\"02f7c0c79c146df408d6e2ef7176b061f198fa7aab61037b008178e29bc49857\" id:\"02f7c0c79c146df408d6e2ef7176b061f198fa7aab61037b008178e29bc49857\" pid:3445 exited_at:{seconds:1746871262 nanos:691291973}" May 10 10:01:02.692004 containerd[1491]: time="2025-05-10T10:01:02.691886365Z" level=info msg="received exit event container_id:\"02f7c0c79c146df408d6e2ef7176b061f198fa7aab61037b008178e29bc49857\" id:\"02f7c0c79c146df408d6e2ef7176b061f198fa7aab61037b008178e29bc49857\" pid:3445 exited_at:{seconds:1746871262 nanos:691291973}" May 10 10:01:02.697496 containerd[1491]: time="2025-05-10T10:01:02.696506997Z" level=info msg="StartContainer for \"02f7c0c79c146df408d6e2ef7176b061f198fa7aab61037b008178e29bc49857\" returns successfully" May 10 10:01:02.720788 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-02f7c0c79c146df408d6e2ef7176b061f198fa7aab61037b008178e29bc49857-rootfs.mount: Deactivated successfully. May 10 10:01:03.532897 containerd[1491]: time="2025-05-10T10:01:03.532212160Z" level=info msg="CreateContainer within sandbox \"b675babd66b94d325da2ecb4e6de5918c4b0fdf0d4c875f91b38353642e141c2\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 10 10:01:03.571666 containerd[1491]: time="2025-05-10T10:01:03.571597762Z" level=info msg="Container 70428e9f22fc26880eae31ffe1cdc0b0212a78b7468537f0f990f33a4499d56a: CDI devices from CRI Config.CDIDevices: []" May 10 10:01:03.602387 containerd[1491]: time="2025-05-10T10:01:03.602264703Z" level=info msg="CreateContainer within sandbox \"b675babd66b94d325da2ecb4e6de5918c4b0fdf0d4c875f91b38353642e141c2\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"70428e9f22fc26880eae31ffe1cdc0b0212a78b7468537f0f990f33a4499d56a\"" May 10 10:01:03.602906 containerd[1491]: time="2025-05-10T10:01:03.602865477Z" level=info msg="StartContainer for \"70428e9f22fc26880eae31ffe1cdc0b0212a78b7468537f0f990f33a4499d56a\"" May 10 10:01:03.604961 containerd[1491]: time="2025-05-10T10:01:03.604923440Z" level=info msg="connecting to shim 70428e9f22fc26880eae31ffe1cdc0b0212a78b7468537f0f990f33a4499d56a" address="unix:///run/containerd/s/cc458fdd2289b35c81aeb96640300bf1ac398117566c639e412bbfd8f9dbac6f" protocol=ttrpc version=3 May 10 10:01:03.629529 systemd[1]: Started cri-containerd-70428e9f22fc26880eae31ffe1cdc0b0212a78b7468537f0f990f33a4499d56a.scope - libcontainer container 70428e9f22fc26880eae31ffe1cdc0b0212a78b7468537f0f990f33a4499d56a. 
May 10 10:01:03.672753 containerd[1491]: time="2025-05-10T10:01:03.672685953Z" level=info msg="StartContainer for \"70428e9f22fc26880eae31ffe1cdc0b0212a78b7468537f0f990f33a4499d56a\" returns successfully" May 10 10:01:03.732282 containerd[1491]: time="2025-05-10T10:01:03.732192319Z" level=info msg="TaskExit event in podsandbox handler container_id:\"70428e9f22fc26880eae31ffe1cdc0b0212a78b7468537f0f990f33a4499d56a\" id:\"e9f1baeeb4be137106698637528bc6f86e40d547a999b446a77b1f1ef9355c8d\" pid:3512 exited_at:{seconds:1746871263 nanos:731716421}" May 10 10:01:03.746388 kubelet[2860]: I0510 10:01:03.745574 2860 kubelet_node_status.go:497] "Fast updating node status as it just became ready" May 10 10:01:03.782129 kubelet[2860]: I0510 10:01:03.782078 2860 topology_manager.go:215] "Topology Admit Handler" podUID="aeb43068-2da6-429a-8c3a-56b66c682398" podNamespace="kube-system" podName="coredns-7db6d8ff4d-6kltv" May 10 10:01:03.788041 kubelet[2860]: I0510 10:01:03.787958 2860 topology_manager.go:215] "Topology Admit Handler" podUID="bfdfdf31-512f-447f-bc05-2dbea8a003f7" podNamespace="kube-system" podName="coredns-7db6d8ff4d-wvc5q" May 10 10:01:03.792410 systemd[1]: Created slice kubepods-burstable-podaeb43068_2da6_429a_8c3a_56b66c682398.slice - libcontainer container kubepods-burstable-podaeb43068_2da6_429a_8c3a_56b66c682398.slice. May 10 10:01:03.795890 kubelet[2860]: W0510 10:01:03.795870 2860 reflector.go:547] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ci-4330-0-0-n-cc41d9e3f6.novalocal" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4330-0-0-n-cc41d9e3f6.novalocal' and this object May 10 10:01:03.796029 kubelet[2860]: E0510 10:01:03.796014 2860 reflector.go:150] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ci-4330-0-0-n-cc41d9e3f6.novalocal" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4330-0-0-n-cc41d9e3f6.novalocal' and this object May 10 10:01:03.806487 systemd[1]: Created slice kubepods-burstable-podbfdfdf31_512f_447f_bc05_2dbea8a003f7.slice - libcontainer container kubepods-burstable-podbfdfdf31_512f_447f_bc05_2dbea8a003f7.slice. 
May 10 10:01:03.894916 kubelet[2860]: I0510 10:01:03.894875 2860 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4mzw\" (UniqueName: \"kubernetes.io/projected/aeb43068-2da6-429a-8c3a-56b66c682398-kube-api-access-n4mzw\") pod \"coredns-7db6d8ff4d-6kltv\" (UID: \"aeb43068-2da6-429a-8c3a-56b66c682398\") " pod="kube-system/coredns-7db6d8ff4d-6kltv" May 10 10:01:03.895551 kubelet[2860]: I0510 10:01:03.895414 2860 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bfdfdf31-512f-447f-bc05-2dbea8a003f7-config-volume\") pod \"coredns-7db6d8ff4d-wvc5q\" (UID: \"bfdfdf31-512f-447f-bc05-2dbea8a003f7\") " pod="kube-system/coredns-7db6d8ff4d-wvc5q" May 10 10:01:03.895551 kubelet[2860]: I0510 10:01:03.895444 2860 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zvsv\" (UniqueName: \"kubernetes.io/projected/bfdfdf31-512f-447f-bc05-2dbea8a003f7-kube-api-access-6zvsv\") pod \"coredns-7db6d8ff4d-wvc5q\" (UID: \"bfdfdf31-512f-447f-bc05-2dbea8a003f7\") " pod="kube-system/coredns-7db6d8ff4d-wvc5q" May 10 10:01:03.895551 kubelet[2860]: I0510 10:01:03.895513 2860 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/aeb43068-2da6-429a-8c3a-56b66c682398-config-volume\") pod \"coredns-7db6d8ff4d-6kltv\" (UID: \"aeb43068-2da6-429a-8c3a-56b66c682398\") " pod="kube-system/coredns-7db6d8ff4d-6kltv" May 10 10:01:04.572593 kubelet[2860]: I0510 10:01:04.571442 2860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-2r9vj" podStartSLOduration=8.730601136 podStartE2EDuration="17.571276115s" podCreationTimestamp="2025-05-10 10:00:47 +0000 UTC" firstStartedPulling="2025-05-10 10:00:49.020567611 +0000 UTC m=+18.029448380" lastFinishedPulling="2025-05-10 10:00:57.86124255 +0000 UTC m=+26.870123359" observedRunningTime="2025-05-10 10:01:04.570833631 +0000 UTC m=+33.579714471" watchObservedRunningTime="2025-05-10 10:01:04.571276115 +0000 UTC m=+33.580156924" May 10 10:01:04.997014 kubelet[2860]: E0510 10:01:04.996804 2860 configmap.go:199] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition May 10 10:01:04.997014 kubelet[2860]: E0510 10:01:04.996852 2860 configmap.go:199] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition May 10 10:01:04.997014 kubelet[2860]: E0510 10:01:04.996969 2860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/bfdfdf31-512f-447f-bc05-2dbea8a003f7-config-volume podName:bfdfdf31-512f-447f-bc05-2dbea8a003f7 nodeName:}" failed. No retries permitted until 2025-05-10 10:01:05.496921516 +0000 UTC m=+34.505802325 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/bfdfdf31-512f-447f-bc05-2dbea8a003f7-config-volume") pod "coredns-7db6d8ff4d-wvc5q" (UID: "bfdfdf31-512f-447f-bc05-2dbea8a003f7") : failed to sync configmap cache: timed out waiting for the condition May 10 10:01:04.997014 kubelet[2860]: E0510 10:01:04.997009 2860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/aeb43068-2da6-429a-8c3a-56b66c682398-config-volume podName:aeb43068-2da6-429a-8c3a-56b66c682398 nodeName:}" failed. 
No retries permitted until 2025-05-10 10:01:05.496989053 +0000 UTC m=+34.505869862 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/aeb43068-2da6-429a-8c3a-56b66c682398-config-volume") pod "coredns-7db6d8ff4d-6kltv" (UID: "aeb43068-2da6-429a-8c3a-56b66c682398") : failed to sync configmap cache: timed out waiting for the condition May 10 10:01:05.602572 containerd[1491]: time="2025-05-10T10:01:05.602484557Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-6kltv,Uid:aeb43068-2da6-429a-8c3a-56b66c682398,Namespace:kube-system,Attempt:0,}" May 10 10:01:05.611508 containerd[1491]: time="2025-05-10T10:01:05.611458867Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-wvc5q,Uid:bfdfdf31-512f-447f-bc05-2dbea8a003f7,Namespace:kube-system,Attempt:0,}" May 10 10:01:05.938351 systemd-networkd[1419]: cilium_host: Link UP May 10 10:01:05.939423 systemd-networkd[1419]: cilium_net: Link UP May 10 10:01:05.940111 systemd-networkd[1419]: cilium_net: Gained carrier May 10 10:01:05.940286 systemd-networkd[1419]: cilium_host: Gained carrier May 10 10:01:06.029391 systemd-networkd[1419]: cilium_vxlan: Link UP May 10 10:01:06.029695 systemd-networkd[1419]: cilium_vxlan: Gained carrier May 10 10:01:06.368514 kernel: NET: Registered PF_ALG protocol family May 10 10:01:06.415518 systemd-networkd[1419]: cilium_net: Gained IPv6LL May 10 10:01:06.791515 systemd-networkd[1419]: cilium_host: Gained IPv6LL May 10 10:01:07.297722 systemd-networkd[1419]: lxc_health: Link UP May 10 10:01:07.299052 systemd-networkd[1419]: lxc_health: Gained carrier May 10 10:01:07.432506 systemd-networkd[1419]: cilium_vxlan: Gained IPv6LL May 10 10:01:07.962830 kernel: eth0: renamed from tmpdec72 May 10 10:01:07.972396 kernel: eth0: renamed from tmpdeb9a May 10 10:01:07.982303 systemd-networkd[1419]: lxce77152110fb5: Link UP May 10 10:01:07.987116 systemd-networkd[1419]: lxca3847a10f4db: Link UP May 10 10:01:07.989312 systemd-networkd[1419]: lxca3847a10f4db: Gained carrier May 10 10:01:07.999770 systemd-networkd[1419]: lxce77152110fb5: Gained carrier May 10 10:01:08.647530 systemd-networkd[1419]: lxc_health: Gained IPv6LL May 10 10:01:09.671784 systemd-networkd[1419]: lxca3847a10f4db: Gained IPv6LL May 10 10:01:09.991675 systemd-networkd[1419]: lxce77152110fb5: Gained IPv6LL May 10 10:01:12.894856 containerd[1491]: time="2025-05-10T10:01:12.894641837Z" level=info msg="connecting to shim dec72256aa53045ae2b6236dee155d7313f64263dfd6f2a9f698cb9acf5c1031" address="unix:///run/containerd/s/3d6d970cd0fb3271819c11217a36b5b8b86dd1df335d22c4e619596ee330f22d" namespace=k8s.io protocol=ttrpc version=3 May 10 10:01:12.956872 systemd[1]: Started cri-containerd-dec72256aa53045ae2b6236dee155d7313f64263dfd6f2a9f698cb9acf5c1031.scope - libcontainer container dec72256aa53045ae2b6236dee155d7313f64263dfd6f2a9f698cb9acf5c1031. May 10 10:01:12.970123 containerd[1491]: time="2025-05-10T10:01:12.970025905Z" level=info msg="connecting to shim deb9a3c307a2623373dbfbef226d243cfd02a51934d7d444a5af79568c740b1e" address="unix:///run/containerd/s/8a9afb92b837cc2477e5b5a04c6dc242dd342f60de78636d93b47789bce51ab1" namespace=k8s.io protocol=ttrpc version=3 May 10 10:01:13.010503 systemd[1]: Started cri-containerd-deb9a3c307a2623373dbfbef226d243cfd02a51934d7d444a5af79568c740b1e.scope - libcontainer container deb9a3c307a2623373dbfbef226d243cfd02a51934d7d444a5af79568c740b1e. 
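[Editor's note] The MountVolume.SetUp failures above are requeued with backoff, and the "No retries permitted until …" deadline is exactly the failure instant plus the logged durationBeforeRetry of 500ms. A small Go sketch of that relationship, using the deadline for the coredns-7db6d8ff4d-wvc5q config-volume (timestamps copied from the record; illustrative only):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// "No retries permitted until ..." deadline from the config-volume error above.
	deadline := time.Date(2025, time.May, 10, 10, 1, 5, 496921516, time.UTC)
	backoff := 500 * time.Millisecond // the logged durationBeforeRetry

	// Subtracting the backoff recovers the instant the failure was recorded.
	fmt.Println(deadline.Add(-backoff)) // 2025-05-10 10:01:04.996921516 +0000 UTC
}
```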
May 10 10:01:13.039753 containerd[1491]: time="2025-05-10T10:01:13.039720302Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-wvc5q,Uid:bfdfdf31-512f-447f-bc05-2dbea8a003f7,Namespace:kube-system,Attempt:0,} returns sandbox id \"dec72256aa53045ae2b6236dee155d7313f64263dfd6f2a9f698cb9acf5c1031\"" May 10 10:01:13.044073 containerd[1491]: time="2025-05-10T10:01:13.044036372Z" level=info msg="CreateContainer within sandbox \"dec72256aa53045ae2b6236dee155d7313f64263dfd6f2a9f698cb9acf5c1031\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 10 10:01:13.057988 containerd[1491]: time="2025-05-10T10:01:13.057951372Z" level=info msg="Container 3d040acf0ad20bc72b7892abd9baa10dfa655406c267f247855d52022aa32c9b: CDI devices from CRI Config.CDIDevices: []" May 10 10:01:13.072298 containerd[1491]: time="2025-05-10T10:01:13.072190161Z" level=info msg="CreateContainer within sandbox \"dec72256aa53045ae2b6236dee155d7313f64263dfd6f2a9f698cb9acf5c1031\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3d040acf0ad20bc72b7892abd9baa10dfa655406c267f247855d52022aa32c9b\"" May 10 10:01:13.073459 containerd[1491]: time="2025-05-10T10:01:13.072783367Z" level=info msg="StartContainer for \"3d040acf0ad20bc72b7892abd9baa10dfa655406c267f247855d52022aa32c9b\"" May 10 10:01:13.074488 containerd[1491]: time="2025-05-10T10:01:13.074339505Z" level=info msg="connecting to shim 3d040acf0ad20bc72b7892abd9baa10dfa655406c267f247855d52022aa32c9b" address="unix:///run/containerd/s/3d6d970cd0fb3271819c11217a36b5b8b86dd1df335d22c4e619596ee330f22d" protocol=ttrpc version=3 May 10 10:01:13.084780 containerd[1491]: time="2025-05-10T10:01:13.084745724Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-6kltv,Uid:aeb43068-2da6-429a-8c3a-56b66c682398,Namespace:kube-system,Attempt:0,} returns sandbox id \"deb9a3c307a2623373dbfbef226d243cfd02a51934d7d444a5af79568c740b1e\"" May 10 10:01:13.090779 containerd[1491]: time="2025-05-10T10:01:13.090621709Z" level=info msg="CreateContainer within sandbox \"deb9a3c307a2623373dbfbef226d243cfd02a51934d7d444a5af79568c740b1e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 10 10:01:13.105261 containerd[1491]: time="2025-05-10T10:01:13.105215586Z" level=info msg="Container 592cedeb04d88156ca2ec88e58d19ac3366b025a8b3e4104b86cb47f8de2ccea: CDI devices from CRI Config.CDIDevices: []" May 10 10:01:13.106512 systemd[1]: Started cri-containerd-3d040acf0ad20bc72b7892abd9baa10dfa655406c267f247855d52022aa32c9b.scope - libcontainer container 3d040acf0ad20bc72b7892abd9baa10dfa655406c267f247855d52022aa32c9b. 
May 10 10:01:13.118667 containerd[1491]: time="2025-05-10T10:01:13.118599297Z" level=info msg="CreateContainer within sandbox \"deb9a3c307a2623373dbfbef226d243cfd02a51934d7d444a5af79568c740b1e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"592cedeb04d88156ca2ec88e58d19ac3366b025a8b3e4104b86cb47f8de2ccea\"" May 10 10:01:13.120349 containerd[1491]: time="2025-05-10T10:01:13.120317659Z" level=info msg="StartContainer for \"592cedeb04d88156ca2ec88e58d19ac3366b025a8b3e4104b86cb47f8de2ccea\"" May 10 10:01:13.122217 containerd[1491]: time="2025-05-10T10:01:13.122133234Z" level=info msg="connecting to shim 592cedeb04d88156ca2ec88e58d19ac3366b025a8b3e4104b86cb47f8de2ccea" address="unix:///run/containerd/s/8a9afb92b837cc2477e5b5a04c6dc242dd342f60de78636d93b47789bce51ab1" protocol=ttrpc version=3 May 10 10:01:13.152556 systemd[1]: Started cri-containerd-592cedeb04d88156ca2ec88e58d19ac3366b025a8b3e4104b86cb47f8de2ccea.scope - libcontainer container 592cedeb04d88156ca2ec88e58d19ac3366b025a8b3e4104b86cb47f8de2ccea. May 10 10:01:13.160289 containerd[1491]: time="2025-05-10T10:01:13.160253776Z" level=info msg="StartContainer for \"3d040acf0ad20bc72b7892abd9baa10dfa655406c267f247855d52022aa32c9b\" returns successfully" May 10 10:01:13.196733 containerd[1491]: time="2025-05-10T10:01:13.196698043Z" level=info msg="StartContainer for \"592cedeb04d88156ca2ec88e58d19ac3366b025a8b3e4104b86cb47f8de2ccea\" returns successfully" May 10 10:01:13.646174 kubelet[2860]: I0510 10:01:13.645607 2860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-6kltv" podStartSLOduration=26.645584905 podStartE2EDuration="26.645584905s" podCreationTimestamp="2025-05-10 10:00:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-10 10:01:13.609983354 +0000 UTC m=+42.618864163" watchObservedRunningTime="2025-05-10 10:01:13.645584905 +0000 UTC m=+42.654465684" May 10 10:03:53.040586 systemd[1]: Started sshd@9-172.24.4.22:22-172.24.4.1:37638.service - OpenSSH per-connection server daemon (172.24.4.1:37638). May 10 10:03:54.358470 sshd[4177]: Accepted publickey for core from 172.24.4.1 port 37638 ssh2: RSA SHA256:s763iqE5ZQO2n9I9yHPInO5+M518XrNVWKB/LWGB6zk May 10 10:03:54.363116 sshd-session[4177]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 10:03:54.381851 systemd-logind[1468]: New session 12 of user core. May 10 10:03:54.388781 systemd[1]: Started session-12.scope - Session 12 of User core. May 10 10:03:55.230972 sshd[4179]: Connection closed by 172.24.4.1 port 37638 May 10 10:03:55.231795 sshd-session[4177]: pam_unix(sshd:session): session closed for user core May 10 10:03:55.237093 systemd-logind[1468]: Session 12 logged out. Waiting for processes to exit. May 10 10:03:55.239354 systemd[1]: sshd@9-172.24.4.22:22-172.24.4.1:37638.service: Deactivated successfully. May 10 10:03:55.246145 systemd[1]: session-12.scope: Deactivated successfully. May 10 10:03:55.248280 systemd-logind[1468]: Removed session 12. May 10 10:04:00.255792 systemd[1]: Started sshd@10-172.24.4.22:22-172.24.4.1:34770.service - OpenSSH per-connection server daemon (172.24.4.1:34770). 
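[Editor's note] The "m=+42.654465684"-style suffixes on the kubelet timestamps above are Go's monotonic clock reading, which time.Time.String() appends when the value came from time.Now(); in practice the reading counts up from process start, which is why the offsets track kubelet uptime (m=+1.47 shortly after startup, m=+42.65 here). A tiny Go sketch of where that suffix comes from:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	t := time.Now()

	// String() appends " m=+<seconds>" whenever the value carries a monotonic
	// reading, which is exactly the suffix seen on the kubelet timestamps above.
	fmt.Println(t)

	// Round(0) strips the monotonic reading, so the suffix disappears.
	fmt.Println(t.Round(0))
}
```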
May 10 10:04:01.450149 sshd[4193]: Accepted publickey for core from 172.24.4.1 port 34770 ssh2: RSA SHA256:s763iqE5ZQO2n9I9yHPInO5+M518XrNVWKB/LWGB6zk May 10 10:04:01.454726 sshd-session[4193]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 10:04:01.469312 systemd-logind[1468]: New session 13 of user core. May 10 10:04:01.485962 systemd[1]: Started session-13.scope - Session 13 of User core. May 10 10:04:02.235504 sshd[4195]: Connection closed by 172.24.4.1 port 34770 May 10 10:04:02.235254 sshd-session[4193]: pam_unix(sshd:session): session closed for user core May 10 10:04:02.244555 systemd-logind[1468]: Session 13 logged out. Waiting for processes to exit. May 10 10:04:02.244713 systemd[1]: sshd@10-172.24.4.22:22-172.24.4.1:34770.service: Deactivated successfully. May 10 10:04:02.252780 systemd[1]: session-13.scope: Deactivated successfully. May 10 10:04:02.258964 systemd-logind[1468]: Removed session 13. May 10 10:04:07.256426 systemd[1]: Started sshd@11-172.24.4.22:22-172.24.4.1:36674.service - OpenSSH per-connection server daemon (172.24.4.1:36674). May 10 10:04:08.412894 sshd[4208]: Accepted publickey for core from 172.24.4.1 port 36674 ssh2: RSA SHA256:s763iqE5ZQO2n9I9yHPInO5+M518XrNVWKB/LWGB6zk May 10 10:04:08.416569 sshd-session[4208]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 10:04:08.429501 systemd-logind[1468]: New session 14 of user core. May 10 10:04:08.436757 systemd[1]: Started session-14.scope - Session 14 of User core. May 10 10:04:09.240734 sshd[4210]: Connection closed by 172.24.4.1 port 36674 May 10 10:04:09.242155 sshd-session[4208]: pam_unix(sshd:session): session closed for user core May 10 10:04:09.248625 systemd[1]: sshd@11-172.24.4.22:22-172.24.4.1:36674.service: Deactivated successfully. May 10 10:04:09.251564 systemd[1]: session-14.scope: Deactivated successfully. May 10 10:04:09.252767 systemd-logind[1468]: Session 14 logged out. Waiting for processes to exit. May 10 10:04:09.254913 systemd-logind[1468]: Removed session 14. May 10 10:04:14.277211 systemd[1]: Started sshd@12-172.24.4.22:22-172.24.4.1:51380.service - OpenSSH per-connection server daemon (172.24.4.1:51380). May 10 10:04:15.511334 sshd[4223]: Accepted publickey for core from 172.24.4.1 port 51380 ssh2: RSA SHA256:s763iqE5ZQO2n9I9yHPInO5+M518XrNVWKB/LWGB6zk May 10 10:04:15.514888 sshd-session[4223]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 10:04:15.538573 systemd-logind[1468]: New session 15 of user core. May 10 10:04:15.543828 systemd[1]: Started session-15.scope - Session 15 of User core. May 10 10:04:16.426664 sshd[4225]: Connection closed by 172.24.4.1 port 51380 May 10 10:04:16.427328 sshd-session[4223]: pam_unix(sshd:session): session closed for user core May 10 10:04:16.449993 systemd[1]: sshd@12-172.24.4.22:22-172.24.4.1:51380.service: Deactivated successfully. May 10 10:04:16.458096 systemd[1]: session-15.scope: Deactivated successfully. May 10 10:04:16.463960 systemd-logind[1468]: Session 15 logged out. Waiting for processes to exit. May 10 10:04:16.473859 systemd[1]: Started sshd@13-172.24.4.22:22-172.24.4.1:51382.service - OpenSSH per-connection server daemon (172.24.4.1:51382). May 10 10:04:16.483136 systemd-logind[1468]: Removed session 15. 
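[Editor's note] Each inbound SSH connection above gets its own socket-activated unit, named "sshd@<counter>-<local addr>-<remote addr>.service", so both endpoints can be read straight out of the unit name. A hedged Go sketch pulling them out of one of the names in this log; the split is ad hoc and only meant for names shaped like these, not a general systemd-unit unescaper:

```go
package main

import (
	"fmt"
	"strings"
)

func main() {
	// Unit name copied from the journal above.
	unit := "sshd@13-172.24.4.22:22-172.24.4.1:51382.service"

	name := strings.TrimSuffix(strings.TrimPrefix(unit, "sshd@"), ".service")
	parts := strings.SplitN(name, "-", 3) // connection counter, local addr, remote addr
	fmt.Println("local:", parts[1], "remote:", parts[2])
	// local: 172.24.4.22:22 remote: 172.24.4.1:51382
}
```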
May 10 10:04:17.820152 sshd[4237]: Accepted publickey for core from 172.24.4.1 port 51382 ssh2: RSA SHA256:s763iqE5ZQO2n9I9yHPInO5+M518XrNVWKB/LWGB6zk May 10 10:04:17.821607 sshd-session[4237]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 10:04:17.836494 systemd-logind[1468]: New session 16 of user core. May 10 10:04:17.848169 systemd[1]: Started session-16.scope - Session 16 of User core. May 10 10:04:18.741947 sshd[4240]: Connection closed by 172.24.4.1 port 51382 May 10 10:04:18.741581 sshd-session[4237]: pam_unix(sshd:session): session closed for user core May 10 10:04:18.767832 systemd[1]: sshd@13-172.24.4.22:22-172.24.4.1:51382.service: Deactivated successfully. May 10 10:04:18.773995 systemd[1]: session-16.scope: Deactivated successfully. May 10 10:04:18.777134 systemd-logind[1468]: Session 16 logged out. Waiting for processes to exit. May 10 10:04:18.784351 systemd[1]: Started sshd@14-172.24.4.22:22-172.24.4.1:51398.service - OpenSSH per-connection server daemon (172.24.4.1:51398). May 10 10:04:18.788544 systemd-logind[1468]: Removed session 16. May 10 10:04:20.273735 sshd[4249]: Accepted publickey for core from 172.24.4.1 port 51398 ssh2: RSA SHA256:s763iqE5ZQO2n9I9yHPInO5+M518XrNVWKB/LWGB6zk May 10 10:04:20.277117 sshd-session[4249]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 10:04:20.291515 systemd-logind[1468]: New session 17 of user core. May 10 10:04:20.300741 systemd[1]: Started session-17.scope - Session 17 of User core. May 10 10:04:21.106914 sshd[4254]: Connection closed by 172.24.4.1 port 51398 May 10 10:04:21.107649 sshd-session[4249]: pam_unix(sshd:session): session closed for user core May 10 10:04:21.113904 systemd[1]: sshd@14-172.24.4.22:22-172.24.4.1:51398.service: Deactivated successfully. May 10 10:04:21.116737 systemd[1]: session-17.scope: Deactivated successfully. May 10 10:04:21.119455 systemd-logind[1468]: Session 17 logged out. Waiting for processes to exit. May 10 10:04:21.121749 systemd-logind[1468]: Removed session 17. May 10 10:04:26.131304 systemd[1]: Started sshd@15-172.24.4.22:22-172.24.4.1:43540.service - OpenSSH per-connection server daemon (172.24.4.1:43540). May 10 10:04:27.378107 sshd[4267]: Accepted publickey for core from 172.24.4.1 port 43540 ssh2: RSA SHA256:s763iqE5ZQO2n9I9yHPInO5+M518XrNVWKB/LWGB6zk May 10 10:04:27.387960 sshd-session[4267]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 10:04:27.418249 systemd-logind[1468]: New session 18 of user core. May 10 10:04:27.425209 systemd[1]: Started session-18.scope - Session 18 of User core. May 10 10:04:28.163783 sshd[4272]: Connection closed by 172.24.4.1 port 43540 May 10 10:04:28.165023 sshd-session[4267]: pam_unix(sshd:session): session closed for user core May 10 10:04:28.172729 systemd[1]: sshd@15-172.24.4.22:22-172.24.4.1:43540.service: Deactivated successfully. May 10 10:04:28.177876 systemd[1]: session-18.scope: Deactivated successfully. May 10 10:04:28.182110 systemd-logind[1468]: Session 18 logged out. Waiting for processes to exit. May 10 10:04:28.186523 systemd-logind[1468]: Removed session 18. May 10 10:04:33.196206 systemd[1]: Started sshd@16-172.24.4.22:22-172.24.4.1:43552.service - OpenSSH per-connection server daemon (172.24.4.1:43552). 
May 10 10:04:34.488550 sshd[4286]: Accepted publickey for core from 172.24.4.1 port 43552 ssh2: RSA SHA256:s763iqE5ZQO2n9I9yHPInO5+M518XrNVWKB/LWGB6zk May 10 10:04:34.492890 sshd-session[4286]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 10:04:34.511784 systemd-logind[1468]: New session 19 of user core. May 10 10:04:34.517125 systemd[1]: Started session-19.scope - Session 19 of User core. May 10 10:04:35.361570 sshd[4288]: Connection closed by 172.24.4.1 port 43552 May 10 10:04:35.364980 sshd-session[4286]: pam_unix(sshd:session): session closed for user core May 10 10:04:35.389318 systemd[1]: sshd@16-172.24.4.22:22-172.24.4.1:43552.service: Deactivated successfully. May 10 10:04:35.396298 systemd[1]: session-19.scope: Deactivated successfully. May 10 10:04:35.400116 systemd-logind[1468]: Session 19 logged out. Waiting for processes to exit. May 10 10:04:35.408872 systemd[1]: Started sshd@17-172.24.4.22:22-172.24.4.1:44526.service - OpenSSH per-connection server daemon (172.24.4.1:44526). May 10 10:04:35.414523 systemd-logind[1468]: Removed session 19. May 10 10:04:36.765923 sshd[4300]: Accepted publickey for core from 172.24.4.1 port 44526 ssh2: RSA SHA256:s763iqE5ZQO2n9I9yHPInO5+M518XrNVWKB/LWGB6zk May 10 10:04:36.769189 sshd-session[4300]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 10:04:36.783757 systemd-logind[1468]: New session 20 of user core. May 10 10:04:36.790718 systemd[1]: Started session-20.scope - Session 20 of User core. May 10 10:04:37.741605 sshd[4303]: Connection closed by 172.24.4.1 port 44526 May 10 10:04:37.742354 sshd-session[4300]: pam_unix(sshd:session): session closed for user core May 10 10:04:37.759480 systemd[1]: sshd@17-172.24.4.22:22-172.24.4.1:44526.service: Deactivated successfully. May 10 10:04:37.764938 systemd[1]: session-20.scope: Deactivated successfully. May 10 10:04:37.770007 systemd-logind[1468]: Session 20 logged out. Waiting for processes to exit. May 10 10:04:37.774983 systemd[1]: Started sshd@18-172.24.4.22:22-172.24.4.1:44528.service - OpenSSH per-connection server daemon (172.24.4.1:44528). May 10 10:04:37.781054 systemd-logind[1468]: Removed session 20. May 10 10:04:39.395634 sshd[4313]: Accepted publickey for core from 172.24.4.1 port 44528 ssh2: RSA SHA256:s763iqE5ZQO2n9I9yHPInO5+M518XrNVWKB/LWGB6zk May 10 10:04:39.398837 sshd-session[4313]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 10:04:39.412172 systemd-logind[1468]: New session 21 of user core. May 10 10:04:39.422726 systemd[1]: Started session-21.scope - Session 21 of User core. May 10 10:04:42.842669 sshd[4316]: Connection closed by 172.24.4.1 port 44528 May 10 10:04:42.844167 sshd-session[4313]: pam_unix(sshd:session): session closed for user core May 10 10:04:42.876548 systemd[1]: sshd@18-172.24.4.22:22-172.24.4.1:44528.service: Deactivated successfully. May 10 10:04:42.885997 systemd[1]: session-21.scope: Deactivated successfully. May 10 10:04:42.893940 systemd-logind[1468]: Session 21 logged out. Waiting for processes to exit. May 10 10:04:42.902072 systemd[1]: Started sshd@19-172.24.4.22:22-172.24.4.1:44534.service - OpenSSH per-connection server daemon (172.24.4.1:44534). May 10 10:04:42.910897 systemd-logind[1468]: Removed session 21. 
May 10 10:04:44.160836 sshd[4332]: Accepted publickey for core from 172.24.4.1 port 44534 ssh2: RSA SHA256:s763iqE5ZQO2n9I9yHPInO5+M518XrNVWKB/LWGB6zk May 10 10:04:44.165266 sshd-session[4332]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 10:04:44.185484 systemd-logind[1468]: New session 22 of user core. May 10 10:04:44.192709 systemd[1]: Started session-22.scope - Session 22 of User core. May 10 10:04:45.186530 sshd[4335]: Connection closed by 172.24.4.1 port 44534 May 10 10:04:45.188749 sshd-session[4332]: pam_unix(sshd:session): session closed for user core May 10 10:04:45.214015 systemd[1]: sshd@19-172.24.4.22:22-172.24.4.1:44534.service: Deactivated successfully. May 10 10:04:45.221946 systemd[1]: session-22.scope: Deactivated successfully. May 10 10:04:45.225445 systemd-logind[1468]: Session 22 logged out. Waiting for processes to exit. May 10 10:04:45.233235 systemd[1]: Started sshd@20-172.24.4.22:22-172.24.4.1:43496.service - OpenSSH per-connection server daemon (172.24.4.1:43496). May 10 10:04:45.238599 systemd-logind[1468]: Removed session 22. May 10 10:04:46.852444 sshd[4343]: Accepted publickey for core from 172.24.4.1 port 43496 ssh2: RSA SHA256:s763iqE5ZQO2n9I9yHPInO5+M518XrNVWKB/LWGB6zk May 10 10:04:46.856118 sshd-session[4343]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 10:04:46.871725 systemd-logind[1468]: New session 23 of user core. May 10 10:04:46.880735 systemd[1]: Started session-23.scope - Session 23 of User core. May 10 10:04:47.667450 sshd[4346]: Connection closed by 172.24.4.1 port 43496 May 10 10:04:47.667092 sshd-session[4343]: pam_unix(sshd:session): session closed for user core May 10 10:04:47.676718 systemd[1]: sshd@20-172.24.4.22:22-172.24.4.1:43496.service: Deactivated successfully. May 10 10:04:47.684180 systemd[1]: session-23.scope: Deactivated successfully. May 10 10:04:47.691833 systemd-logind[1468]: Session 23 logged out. Waiting for processes to exit. May 10 10:04:47.695306 systemd-logind[1468]: Removed session 23. May 10 10:04:52.697012 systemd[1]: Started sshd@21-172.24.4.22:22-172.24.4.1:43500.service - OpenSSH per-connection server daemon (172.24.4.1:43500). May 10 10:04:53.672193 sshd[4363]: Accepted publickey for core from 172.24.4.1 port 43500 ssh2: RSA SHA256:s763iqE5ZQO2n9I9yHPInO5+M518XrNVWKB/LWGB6zk May 10 10:04:53.673579 sshd-session[4363]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 10:04:53.687499 systemd-logind[1468]: New session 24 of user core. May 10 10:04:53.692722 systemd[1]: Started session-24.scope - Session 24 of User core. May 10 10:04:54.583490 sshd[4365]: Connection closed by 172.24.4.1 port 43500 May 10 10:04:54.584911 sshd-session[4363]: pam_unix(sshd:session): session closed for user core May 10 10:04:54.594830 systemd[1]: sshd@21-172.24.4.22:22-172.24.4.1:43500.service: Deactivated successfully. May 10 10:04:54.601704 systemd[1]: session-24.scope: Deactivated successfully. May 10 10:04:54.606207 systemd-logind[1468]: Session 24 logged out. Waiting for processes to exit. May 10 10:04:54.609264 systemd-logind[1468]: Removed session 24. May 10 10:04:59.626114 systemd[1]: Started sshd@22-172.24.4.22:22-172.24.4.1:43234.service - OpenSSH per-connection server daemon (172.24.4.1:43234). 
May 10 10:05:00.690556 sshd[4377]: Accepted publickey for core from 172.24.4.1 port 43234 ssh2: RSA SHA256:s763iqE5ZQO2n9I9yHPInO5+M518XrNVWKB/LWGB6zk May 10 10:05:00.696195 sshd-session[4377]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 10:05:00.710991 systemd-logind[1468]: New session 25 of user core. May 10 10:05:00.727932 systemd[1]: Started session-25.scope - Session 25 of User core. May 10 10:05:01.448702 sshd[4379]: Connection closed by 172.24.4.1 port 43234 May 10 10:05:01.449892 sshd-session[4377]: pam_unix(sshd:session): session closed for user core May 10 10:05:01.458325 systemd[1]: sshd@22-172.24.4.22:22-172.24.4.1:43234.service: Deactivated successfully. May 10 10:05:01.463546 systemd[1]: session-25.scope: Deactivated successfully. May 10 10:05:01.466252 systemd-logind[1468]: Session 25 logged out. Waiting for processes to exit. May 10 10:05:01.469660 systemd-logind[1468]: Removed session 25. May 10 10:05:06.475270 systemd[1]: Started sshd@23-172.24.4.22:22-172.24.4.1:43592.service - OpenSSH per-connection server daemon (172.24.4.1:43592). May 10 10:05:07.584205 sshd[4391]: Accepted publickey for core from 172.24.4.1 port 43592 ssh2: RSA SHA256:s763iqE5ZQO2n9I9yHPInO5+M518XrNVWKB/LWGB6zk May 10 10:05:07.590234 sshd-session[4391]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 10:05:07.603900 systemd-logind[1468]: New session 26 of user core. May 10 10:05:07.619752 systemd[1]: Started session-26.scope - Session 26 of User core. May 10 10:05:08.431427 sshd[4393]: Connection closed by 172.24.4.1 port 43592 May 10 10:05:08.432944 sshd-session[4391]: pam_unix(sshd:session): session closed for user core May 10 10:05:08.460773 systemd[1]: sshd@23-172.24.4.22:22-172.24.4.1:43592.service: Deactivated successfully. May 10 10:05:08.470005 systemd[1]: session-26.scope: Deactivated successfully. May 10 10:05:08.473644 systemd-logind[1468]: Session 26 logged out. Waiting for processes to exit. May 10 10:05:08.482527 systemd[1]: Started sshd@24-172.24.4.22:22-172.24.4.1:43604.service - OpenSSH per-connection server daemon (172.24.4.1:43604). May 10 10:05:08.486291 systemd-logind[1468]: Removed session 26. May 10 10:05:09.698485 sshd[4404]: Accepted publickey for core from 172.24.4.1 port 43604 ssh2: RSA SHA256:s763iqE5ZQO2n9I9yHPInO5+M518XrNVWKB/LWGB6zk May 10 10:05:09.701521 sshd-session[4404]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 10:05:09.717750 systemd-logind[1468]: New session 27 of user core. May 10 10:05:09.733733 systemd[1]: Started session-27.scope - Session 27 of User core. 
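The stretch above is a long run of identical SSH lifecycles: sshd accepts a publickey for core, pam_unix opens the session, systemd-logind registers session N, and later the connection closes and the session scope is deactivated. An illustrative way to audit how long each of these sessions stayed open is to pair the "New session N" and "Removed session N" records; the sketch below assumes one journal record per line (as journalctl emits them) and matches only the line shapes seen in this journal, it is not part of any tool shown here:

```go
// Illustrative only: pair the "New session N of user core" and "Removed
// session N" records above to measure how long each SSH session stayed open.
// Assumes one journal record per line; regexes and timestamp layout follow
// the line shapes in this journal.
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
	"time"
)

var (
	tsRe      = regexp.MustCompile(`^(\w{3} \d{2} \d{2}:\d{2}:\d{2})\.\d+ `)
	newRe     = regexp.MustCompile(`New session (\d+) of user`)
	removedRe = regexp.MustCompile(`Removed session (\d+)\.`)
)

func main() {
	opened := map[string]time.Time{}
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		line := sc.Text()
		m := tsRe.FindStringSubmatch(line)
		if m == nil {
			continue
		}
		// journald's short format omits the year; that is fine for durations.
		t, err := time.Parse("Jan 02 15:04:05", m[1])
		if err != nil {
			continue
		}
		if s := newRe.FindStringSubmatch(line); s != nil {
			opened[s[1]] = t
		} else if s := removedRe.FindStringSubmatch(line); s != nil {
			if start, ok := opened[s[1]]; ok {
				fmt.Printf("session %s lasted %s\n", s[1], t.Sub(start))
			}
		}
	}
}
```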
May 10 10:05:12.095989 kubelet[2860]: I0510 10:05:12.095608 2860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-wvc5q" podStartSLOduration=265.095480971 podStartE2EDuration="4m25.095480971s" podCreationTimestamp="2025-05-10 10:00:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-10 10:01:13.6873752 +0000 UTC m=+42.696255959" watchObservedRunningTime="2025-05-10 10:05:12.095480971 +0000 UTC m=+281.104361770" May 10 10:05:12.122024 containerd[1491]: time="2025-05-10T10:05:12.121905288Z" level=info msg="StopContainer for \"4f3c65f59c97d8d48b0d7aef2564059d24984b0a0bffb04c151f2a4968e85c53\" with timeout 30 (s)" May 10 10:05:12.124099 containerd[1491]: time="2025-05-10T10:05:12.123898681Z" level=info msg="Stop container \"4f3c65f59c97d8d48b0d7aef2564059d24984b0a0bffb04c151f2a4968e85c53\" with signal terminated" May 10 10:05:12.154040 systemd[1]: cri-containerd-4f3c65f59c97d8d48b0d7aef2564059d24984b0a0bffb04c151f2a4968e85c53.scope: Deactivated successfully. May 10 10:05:12.154800 systemd[1]: cri-containerd-4f3c65f59c97d8d48b0d7aef2564059d24984b0a0bffb04c151f2a4968e85c53.scope: Consumed 1.032s CPU time, 28.1M memory peak, 4K written to disk. May 10 10:05:12.167437 containerd[1491]: time="2025-05-10T10:05:12.167375981Z" level=info msg="received exit event container_id:\"4f3c65f59c97d8d48b0d7aef2564059d24984b0a0bffb04c151f2a4968e85c53\" id:\"4f3c65f59c97d8d48b0d7aef2564059d24984b0a0bffb04c151f2a4968e85c53\" pid:3413 exited_at:{seconds:1746871512 nanos:161126786}" May 10 10:05:12.168144 containerd[1491]: time="2025-05-10T10:05:12.167967440Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4f3c65f59c97d8d48b0d7aef2564059d24984b0a0bffb04c151f2a4968e85c53\" id:\"4f3c65f59c97d8d48b0d7aef2564059d24984b0a0bffb04c151f2a4968e85c53\" pid:3413 exited_at:{seconds:1746871512 nanos:161126786}" May 10 10:05:12.184577 containerd[1491]: time="2025-05-10T10:05:12.184400391Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 10 10:05:12.195412 containerd[1491]: time="2025-05-10T10:05:12.195195261Z" level=info msg="TaskExit event in podsandbox handler container_id:\"70428e9f22fc26880eae31ffe1cdc0b0212a78b7468537f0f990f33a4499d56a\" id:\"8390a51c37303a3c4ec78bdf38a73f4e071b4839edee98bbdec2c878eff0518c\" pid:4433 exited_at:{seconds:1746871512 nanos:191844114}" May 10 10:05:12.198270 containerd[1491]: time="2025-05-10T10:05:12.198226660Z" level=info msg="StopContainer for \"70428e9f22fc26880eae31ffe1cdc0b0212a78b7468537f0f990f33a4499d56a\" with timeout 2 (s)" May 10 10:05:12.198879 containerd[1491]: time="2025-05-10T10:05:12.198760419Z" level=info msg="Stop container \"70428e9f22fc26880eae31ffe1cdc0b0212a78b7468537f0f990f33a4499d56a\" with signal terminated" May 10 10:05:12.214704 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4f3c65f59c97d8d48b0d7aef2564059d24984b0a0bffb04c151f2a4968e85c53-rootfs.mount: Deactivated successfully. May 10 10:05:12.230228 systemd-networkd[1419]: lxc_health: Link DOWN May 10 10:05:12.230240 systemd-networkd[1419]: lxc_health: Lost carrier May 10 10:05:12.247956 systemd[1]: cri-containerd-70428e9f22fc26880eae31ffe1cdc0b0212a78b7468537f0f990f33a4499d56a.scope: Deactivated successfully. 
May 10 10:05:12.248379 systemd[1]: cri-containerd-70428e9f22fc26880eae31ffe1cdc0b0212a78b7468537f0f990f33a4499d56a.scope: Consumed 9.880s CPU time, 124.4M memory peak, 136K read from disk, 13.3M written to disk. May 10 10:05:12.255724 containerd[1491]: time="2025-05-10T10:05:12.255563590Z" level=info msg="StopContainer for \"4f3c65f59c97d8d48b0d7aef2564059d24984b0a0bffb04c151f2a4968e85c53\" returns successfully" May 10 10:05:12.255724 containerd[1491]: time="2025-05-10T10:05:12.256210542Z" level=info msg="received exit event container_id:\"70428e9f22fc26880eae31ffe1cdc0b0212a78b7468537f0f990f33a4499d56a\" id:\"70428e9f22fc26880eae31ffe1cdc0b0212a78b7468537f0f990f33a4499d56a\" pid:3484 exited_at:{seconds:1746871512 nanos:251315323}" May 10 10:05:12.255724 containerd[1491]: time="2025-05-10T10:05:12.256344192Z" level=info msg="TaskExit event in podsandbox handler container_id:\"70428e9f22fc26880eae31ffe1cdc0b0212a78b7468537f0f990f33a4499d56a\" id:\"70428e9f22fc26880eae31ffe1cdc0b0212a78b7468537f0f990f33a4499d56a\" pid:3484 exited_at:{seconds:1746871512 nanos:251315323}" May 10 10:05:12.259684 containerd[1491]: time="2025-05-10T10:05:12.259209149Z" level=info msg="StopPodSandbox for \"b79958dcc1cf9bbf43f9cea39492fb16ac65e9cbdc2c9cf491da1d807ed6581e\"" May 10 10:05:12.259684 containerd[1491]: time="2025-05-10T10:05:12.259325407Z" level=info msg="Container to stop \"4f3c65f59c97d8d48b0d7aef2564059d24984b0a0bffb04c151f2a4968e85c53\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 10 10:05:12.277076 systemd[1]: cri-containerd-b79958dcc1cf9bbf43f9cea39492fb16ac65e9cbdc2c9cf491da1d807ed6581e.scope: Deactivated successfully. May 10 10:05:12.283770 containerd[1491]: time="2025-05-10T10:05:12.283455977Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b79958dcc1cf9bbf43f9cea39492fb16ac65e9cbdc2c9cf491da1d807ed6581e\" id:\"b79958dcc1cf9bbf43f9cea39492fb16ac65e9cbdc2c9cf491da1d807ed6581e\" pid:3040 exit_status:137 exited_at:{seconds:1746871512 nanos:282932467}" May 10 10:05:12.313196 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-70428e9f22fc26880eae31ffe1cdc0b0212a78b7468537f0f990f33a4499d56a-rootfs.mount: Deactivated successfully. May 10 10:05:12.335829 containerd[1491]: time="2025-05-10T10:05:12.335772303Z" level=info msg="StopContainer for \"70428e9f22fc26880eae31ffe1cdc0b0212a78b7468537f0f990f33a4499d56a\" returns successfully" May 10 10:05:12.336481 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b79958dcc1cf9bbf43f9cea39492fb16ac65e9cbdc2c9cf491da1d807ed6581e-rootfs.mount: Deactivated successfully. 
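The teardown above follows the usual stop-with-grace-period shape: containerd logs "StopContainer ... with timeout 30 (s)", delivers SIGTERM ("signal terminated"), and only escalates if the process outlives the grace period. The sketch below is a generic illustration of that pattern in plain Go, not containerd's implementation; the sleep process and the 30-second grace value are stand-ins:

```go
// A generic sketch of the stop pattern above (SIGTERM first, SIGKILL after the
// grace period), not containerd's implementation; the sleep process and the
// 30-second grace value are stand-ins.
package main

import (
	"fmt"
	"os/exec"
	"syscall"
	"time"
)

func stopWithTimeout(cmd *exec.Cmd, grace time.Duration) error {
	// Ask the process to terminate, as in "Stop container ... with signal terminated".
	if err := cmd.Process.Signal(syscall.SIGTERM); err != nil {
		return err
	}
	done := make(chan error, 1)
	go func() { done <- cmd.Wait() }()
	select {
	case err := <-done:
		return err // exited within the grace period
	case <-time.After(grace):
		_ = cmd.Process.Kill() // grace period elapsed: force-kill
		return <-done
	}
}

func main() {
	cmd := exec.Command("sleep", "300") // stand-in for a container's init process
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	fmt.Println("stop result:", stopWithTimeout(cmd, 30*time.Second))
}
```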
May 10 10:05:12.344538 containerd[1491]: time="2025-05-10T10:05:12.337667113Z" level=info msg="StopPodSandbox for \"b675babd66b94d325da2ecb4e6de5918c4b0fdf0d4c875f91b38353642e141c2\"" May 10 10:05:12.344538 containerd[1491]: time="2025-05-10T10:05:12.337936367Z" level=info msg="Container to stop \"bdad80e925ea5e54c1b893bd566edfd9e06af99a02c94f53f5f4ab6d927f4098\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 10 10:05:12.344538 containerd[1491]: time="2025-05-10T10:05:12.338019583Z" level=info msg="Container to stop \"b38f1e8da1de9e56e6519ae708b769fd2ed9961296f59f7146cb0e1922612f55\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 10 10:05:12.344538 containerd[1491]: time="2025-05-10T10:05:12.338052134Z" level=info msg="Container to stop \"3c30e87913d7636b44bffa4ec1ffa8e40c99b7afcedaabbe507be7547e9c13e6\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 10 10:05:12.344538 containerd[1491]: time="2025-05-10T10:05:12.338065148Z" level=info msg="Container to stop \"02f7c0c79c146df408d6e2ef7176b061f198fa7aab61037b008178e29bc49857\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 10 10:05:12.344538 containerd[1491]: time="2025-05-10T10:05:12.338077031Z" level=info msg="Container to stop \"70428e9f22fc26880eae31ffe1cdc0b0212a78b7468537f0f990f33a4499d56a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 10 10:05:12.348437 systemd[1]: cri-containerd-b675babd66b94d325da2ecb4e6de5918c4b0fdf0d4c875f91b38353642e141c2.scope: Deactivated successfully. May 10 10:05:12.360786 containerd[1491]: time="2025-05-10T10:05:12.360715025Z" level=info msg="shim disconnected" id=b79958dcc1cf9bbf43f9cea39492fb16ac65e9cbdc2c9cf491da1d807ed6581e namespace=k8s.io May 10 10:05:12.361496 containerd[1491]: time="2025-05-10T10:05:12.360877379Z" level=warning msg="cleaning up after shim disconnected" id=b79958dcc1cf9bbf43f9cea39492fb16ac65e9cbdc2c9cf491da1d807ed6581e namespace=k8s.io May 10 10:05:12.361496 containerd[1491]: time="2025-05-10T10:05:12.360902005Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 10 10:05:12.391865 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b675babd66b94d325da2ecb4e6de5918c4b0fdf0d4c875f91b38353642e141c2-rootfs.mount: Deactivated successfully. 
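The TaskExit events above carry exited_at as protobuf-style {seconds, nanos} pairs. To line those up with the journal's wall-clock stamps, convert them with time.Unix; the values below are copied from the b79958dc... sandbox exit event and land on the same 10:05:12 second the journal shows:

```go
// Illustrative: convert a protobuf-style exited_at{seconds, nanos} pair from
// the TaskExit events above into wall-clock time, so it can be lined up with
// the journal's own timestamps. Values copied from the b79958dc... exit event.
package main

import (
	"fmt"
	"time"
)

func main() {
	exitedAt := time.Unix(1746871512, 282932467).UTC()
	fmt.Println(exitedAt) // 2025-05-10 10:05:12.282932467 +0000 UTC
}
```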
May 10 10:05:12.395245 containerd[1491]: time="2025-05-10T10:05:12.394683564Z" level=info msg="received exit event sandbox_id:\"b79958dcc1cf9bbf43f9cea39492fb16ac65e9cbdc2c9cf491da1d807ed6581e\" exit_status:137 exited_at:{seconds:1746871512 nanos:282932467}" May 10 10:05:12.395730 containerd[1491]: time="2025-05-10T10:05:12.395665583Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b675babd66b94d325da2ecb4e6de5918c4b0fdf0d4c875f91b38353642e141c2\" id:\"b675babd66b94d325da2ecb4e6de5918c4b0fdf0d4c875f91b38353642e141c2\" pid:3033 exit_status:137 exited_at:{seconds:1746871512 nanos:351215231}" May 10 10:05:12.397100 containerd[1491]: time="2025-05-10T10:05:12.397059133Z" level=info msg="TearDown network for sandbox \"b79958dcc1cf9bbf43f9cea39492fb16ac65e9cbdc2c9cf491da1d807ed6581e\" successfully" May 10 10:05:12.397100 containerd[1491]: time="2025-05-10T10:05:12.397087386Z" level=info msg="StopPodSandbox for \"b79958dcc1cf9bbf43f9cea39492fb16ac65e9cbdc2c9cf491da1d807ed6581e\" returns successfully" May 10 10:05:12.399192 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b79958dcc1cf9bbf43f9cea39492fb16ac65e9cbdc2c9cf491da1d807ed6581e-shm.mount: Deactivated successfully. May 10 10:05:12.415882 containerd[1491]: time="2025-05-10T10:05:12.415824173Z" level=info msg="received exit event sandbox_id:\"b675babd66b94d325da2ecb4e6de5918c4b0fdf0d4c875f91b38353642e141c2\" exit_status:137 exited_at:{seconds:1746871512 nanos:351215231}" May 10 10:05:12.416221 containerd[1491]: time="2025-05-10T10:05:12.416188306Z" level=info msg="shim disconnected" id=b675babd66b94d325da2ecb4e6de5918c4b0fdf0d4c875f91b38353642e141c2 namespace=k8s.io May 10 10:05:12.416332 containerd[1491]: time="2025-05-10T10:05:12.416312207Z" level=warning msg="cleaning up after shim disconnected" id=b675babd66b94d325da2ecb4e6de5918c4b0fdf0d4c875f91b38353642e141c2 namespace=k8s.io May 10 10:05:12.416672 containerd[1491]: time="2025-05-10T10:05:12.416629863Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 10 10:05:12.420636 containerd[1491]: time="2025-05-10T10:05:12.420537822Z" level=info msg="TearDown network for sandbox \"b675babd66b94d325da2ecb4e6de5918c4b0fdf0d4c875f91b38353642e141c2\" successfully" May 10 10:05:12.420732 containerd[1491]: time="2025-05-10T10:05:12.420653830Z" level=info msg="StopPodSandbox for \"b675babd66b94d325da2ecb4e6de5918c4b0fdf0d4c875f91b38353642e141c2\" returns successfully" May 10 10:05:12.480450 kubelet[2860]: I0510 10:05:12.480384 2860 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/418f35a5-cb54-4456-8d05-6f4a23bbb187-cilium-config-path\") pod \"418f35a5-cb54-4456-8d05-6f4a23bbb187\" (UID: \"418f35a5-cb54-4456-8d05-6f4a23bbb187\") " May 10 10:05:12.480450 kubelet[2860]: I0510 10:05:12.480463 2860 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lwh8d\" (UniqueName: \"kubernetes.io/projected/418f35a5-cb54-4456-8d05-6f4a23bbb187-kube-api-access-lwh8d\") pod \"418f35a5-cb54-4456-8d05-6f4a23bbb187\" (UID: \"418f35a5-cb54-4456-8d05-6f4a23bbb187\") " May 10 10:05:12.487560 kubelet[2860]: I0510 10:05:12.486859 2860 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/418f35a5-cb54-4456-8d05-6f4a23bbb187-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "418f35a5-cb54-4456-8d05-6f4a23bbb187" (UID: "418f35a5-cb54-4456-8d05-6f4a23bbb187"). 
InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 10 10:05:12.492835 kubelet[2860]: I0510 10:05:12.492250 2860 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/418f35a5-cb54-4456-8d05-6f4a23bbb187-kube-api-access-lwh8d" (OuterVolumeSpecName: "kube-api-access-lwh8d") pod "418f35a5-cb54-4456-8d05-6f4a23bbb187" (UID: "418f35a5-cb54-4456-8d05-6f4a23bbb187"). InnerVolumeSpecName "kube-api-access-lwh8d". PluginName "kubernetes.io/projected", VolumeGidValue "" May 10 10:05:12.581621 kubelet[2860]: I0510 10:05:12.581560 2860 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7519c910-415f-4090-b08a-5ae230482243-hostproc\") pod \"7519c910-415f-4090-b08a-5ae230482243\" (UID: \"7519c910-415f-4090-b08a-5ae230482243\") " May 10 10:05:12.581621 kubelet[2860]: I0510 10:05:12.581623 2860 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7519c910-415f-4090-b08a-5ae230482243-cilium-config-path\") pod \"7519c910-415f-4090-b08a-5ae230482243\" (UID: \"7519c910-415f-4090-b08a-5ae230482243\") " May 10 10:05:12.581621 kubelet[2860]: I0510 10:05:12.581645 2860 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7519c910-415f-4090-b08a-5ae230482243-bpf-maps\") pod \"7519c910-415f-4090-b08a-5ae230482243\" (UID: \"7519c910-415f-4090-b08a-5ae230482243\") " May 10 10:05:12.581902 kubelet[2860]: I0510 10:05:12.581667 2860 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7519c910-415f-4090-b08a-5ae230482243-hubble-tls\") pod \"7519c910-415f-4090-b08a-5ae230482243\" (UID: \"7519c910-415f-4090-b08a-5ae230482243\") " May 10 10:05:12.581902 kubelet[2860]: I0510 10:05:12.581686 2860 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7519c910-415f-4090-b08a-5ae230482243-xtables-lock\") pod \"7519c910-415f-4090-b08a-5ae230482243\" (UID: \"7519c910-415f-4090-b08a-5ae230482243\") " May 10 10:05:12.581902 kubelet[2860]: I0510 10:05:12.581707 2860 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7519c910-415f-4090-b08a-5ae230482243-cilium-run\") pod \"7519c910-415f-4090-b08a-5ae230482243\" (UID: \"7519c910-415f-4090-b08a-5ae230482243\") " May 10 10:05:12.581902 kubelet[2860]: I0510 10:05:12.581735 2860 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7519c910-415f-4090-b08a-5ae230482243-cilium-cgroup\") pod \"7519c910-415f-4090-b08a-5ae230482243\" (UID: \"7519c910-415f-4090-b08a-5ae230482243\") " May 10 10:05:12.581902 kubelet[2860]: I0510 10:05:12.581751 2860 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7519c910-415f-4090-b08a-5ae230482243-etc-cni-netd\") pod \"7519c910-415f-4090-b08a-5ae230482243\" (UID: \"7519c910-415f-4090-b08a-5ae230482243\") " May 10 10:05:12.581902 kubelet[2860]: I0510 10:05:12.581778 2860 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/7519c910-415f-4090-b08a-5ae230482243-lib-modules\") pod \"7519c910-415f-4090-b08a-5ae230482243\" (UID: \"7519c910-415f-4090-b08a-5ae230482243\") " May 10 10:05:12.582105 kubelet[2860]: I0510 10:05:12.581794 2860 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7519c910-415f-4090-b08a-5ae230482243-cni-path\") pod \"7519c910-415f-4090-b08a-5ae230482243\" (UID: \"7519c910-415f-4090-b08a-5ae230482243\") " May 10 10:05:12.582105 kubelet[2860]: I0510 10:05:12.581816 2860 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7519c910-415f-4090-b08a-5ae230482243-host-proc-sys-net\") pod \"7519c910-415f-4090-b08a-5ae230482243\" (UID: \"7519c910-415f-4090-b08a-5ae230482243\") " May 10 10:05:12.582105 kubelet[2860]: I0510 10:05:12.581843 2860 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7519c910-415f-4090-b08a-5ae230482243-clustermesh-secrets\") pod \"7519c910-415f-4090-b08a-5ae230482243\" (UID: \"7519c910-415f-4090-b08a-5ae230482243\") " May 10 10:05:12.582105 kubelet[2860]: I0510 10:05:12.581870 2860 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7519c910-415f-4090-b08a-5ae230482243-host-proc-sys-kernel\") pod \"7519c910-415f-4090-b08a-5ae230482243\" (UID: \"7519c910-415f-4090-b08a-5ae230482243\") " May 10 10:05:12.582105 kubelet[2860]: I0510 10:05:12.581891 2860 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pgg2g\" (UniqueName: \"kubernetes.io/projected/7519c910-415f-4090-b08a-5ae230482243-kube-api-access-pgg2g\") pod \"7519c910-415f-4090-b08a-5ae230482243\" (UID: \"7519c910-415f-4090-b08a-5ae230482243\") " May 10 10:05:12.582105 kubelet[2860]: I0510 10:05:12.581952 2860 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/418f35a5-cb54-4456-8d05-6f4a23bbb187-cilium-config-path\") on node \"ci-4330-0-0-n-cc41d9e3f6.novalocal\" DevicePath \"\"" May 10 10:05:12.582306 kubelet[2860]: I0510 10:05:12.581966 2860 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-lwh8d\" (UniqueName: \"kubernetes.io/projected/418f35a5-cb54-4456-8d05-6f4a23bbb187-kube-api-access-lwh8d\") on node \"ci-4330-0-0-n-cc41d9e3f6.novalocal\" DevicePath \"\"" May 10 10:05:12.584389 kubelet[2860]: I0510 10:05:12.582409 2860 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7519c910-415f-4090-b08a-5ae230482243-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "7519c910-415f-4090-b08a-5ae230482243" (UID: "7519c910-415f-4090-b08a-5ae230482243"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 10 10:05:12.584389 kubelet[2860]: I0510 10:05:12.582470 2860 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7519c910-415f-4090-b08a-5ae230482243-hostproc" (OuterVolumeSpecName: "hostproc") pod "7519c910-415f-4090-b08a-5ae230482243" (UID: "7519c910-415f-4090-b08a-5ae230482243"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 10 10:05:12.587386 kubelet[2860]: I0510 10:05:12.585589 2860 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7519c910-415f-4090-b08a-5ae230482243-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "7519c910-415f-4090-b08a-5ae230482243" (UID: "7519c910-415f-4090-b08a-5ae230482243"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 10 10:05:12.587386 kubelet[2860]: I0510 10:05:12.585656 2860 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7519c910-415f-4090-b08a-5ae230482243-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "7519c910-415f-4090-b08a-5ae230482243" (UID: "7519c910-415f-4090-b08a-5ae230482243"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 10 10:05:12.587386 kubelet[2860]: I0510 10:05:12.586170 2860 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7519c910-415f-4090-b08a-5ae230482243-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "7519c910-415f-4090-b08a-5ae230482243" (UID: "7519c910-415f-4090-b08a-5ae230482243"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 10 10:05:12.587386 kubelet[2860]: I0510 10:05:12.586215 2860 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7519c910-415f-4090-b08a-5ae230482243-cni-path" (OuterVolumeSpecName: "cni-path") pod "7519c910-415f-4090-b08a-5ae230482243" (UID: "7519c910-415f-4090-b08a-5ae230482243"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 10 10:05:12.587386 kubelet[2860]: I0510 10:05:12.586496 2860 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7519c910-415f-4090-b08a-5ae230482243-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "7519c910-415f-4090-b08a-5ae230482243" (UID: "7519c910-415f-4090-b08a-5ae230482243"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 10 10:05:12.588074 kubelet[2860]: I0510 10:05:12.588041 2860 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7519c910-415f-4090-b08a-5ae230482243-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "7519c910-415f-4090-b08a-5ae230482243" (UID: "7519c910-415f-4090-b08a-5ae230482243"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 10 10:05:12.588154 kubelet[2860]: I0510 10:05:12.588076 2860 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7519c910-415f-4090-b08a-5ae230482243-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "7519c910-415f-4090-b08a-5ae230482243" (UID: "7519c910-415f-4090-b08a-5ae230482243"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 10 10:05:12.588325 kubelet[2860]: I0510 10:05:12.588259 2860 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7519c910-415f-4090-b08a-5ae230482243-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "7519c910-415f-4090-b08a-5ae230482243" (UID: "7519c910-415f-4090-b08a-5ae230482243"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 10 10:05:12.589335 kubelet[2860]: I0510 10:05:12.589256 2860 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7519c910-415f-4090-b08a-5ae230482243-kube-api-access-pgg2g" (OuterVolumeSpecName: "kube-api-access-pgg2g") pod "7519c910-415f-4090-b08a-5ae230482243" (UID: "7519c910-415f-4090-b08a-5ae230482243"). InnerVolumeSpecName "kube-api-access-pgg2g". PluginName "kubernetes.io/projected", VolumeGidValue "" May 10 10:05:12.591385 kubelet[2860]: I0510 10:05:12.590875 2860 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7519c910-415f-4090-b08a-5ae230482243-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "7519c910-415f-4090-b08a-5ae230482243" (UID: "7519c910-415f-4090-b08a-5ae230482243"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 10 10:05:12.594990 kubelet[2860]: I0510 10:05:12.594950 2860 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7519c910-415f-4090-b08a-5ae230482243-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "7519c910-415f-4090-b08a-5ae230482243" (UID: "7519c910-415f-4090-b08a-5ae230482243"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" May 10 10:05:12.600078 kubelet[2860]: I0510 10:05:12.598610 2860 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7519c910-415f-4090-b08a-5ae230482243-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "7519c910-415f-4090-b08a-5ae230482243" (UID: "7519c910-415f-4090-b08a-5ae230482243"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" May 10 10:05:12.636677 kubelet[2860]: I0510 10:05:12.635538 2860 scope.go:117] "RemoveContainer" containerID="4f3c65f59c97d8d48b0d7aef2564059d24984b0a0bffb04c151f2a4968e85c53" May 10 10:05:12.641219 containerd[1491]: time="2025-05-10T10:05:12.640930532Z" level=info msg="RemoveContainer for \"4f3c65f59c97d8d48b0d7aef2564059d24984b0a0bffb04c151f2a4968e85c53\"" May 10 10:05:12.642449 systemd[1]: Removed slice kubepods-besteffort-pod418f35a5_cb54_4456_8d05_6f4a23bbb187.slice - libcontainer container kubepods-besteffort-pod418f35a5_cb54_4456_8d05_6f4a23bbb187.slice. May 10 10:05:12.642565 systemd[1]: kubepods-besteffort-pod418f35a5_cb54_4456_8d05_6f4a23bbb187.slice: Consumed 1.067s CPU time, 28.3M memory peak, 4K written to disk. May 10 10:05:12.657528 systemd[1]: Removed slice kubepods-burstable-pod7519c910_415f_4090_b08a_5ae230482243.slice - libcontainer container kubepods-burstable-pod7519c910_415f_4090_b08a_5ae230482243.slice. May 10 10:05:12.657647 systemd[1]: kubepods-burstable-pod7519c910_415f_4090_b08a_5ae230482243.slice: Consumed 9.976s CPU time, 124.9M memory peak, 136K read from disk, 13.3M written to disk. 
May 10 10:05:12.683148 kubelet[2860]: I0510 10:05:12.682969 2860 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7519c910-415f-4090-b08a-5ae230482243-host-proc-sys-net\") on node \"ci-4330-0-0-n-cc41d9e3f6.novalocal\" DevicePath \"\"" May 10 10:05:12.683148 kubelet[2860]: I0510 10:05:12.682999 2860 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7519c910-415f-4090-b08a-5ae230482243-clustermesh-secrets\") on node \"ci-4330-0-0-n-cc41d9e3f6.novalocal\" DevicePath \"\"" May 10 10:05:12.683148 kubelet[2860]: I0510 10:05:12.683012 2860 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7519c910-415f-4090-b08a-5ae230482243-cni-path\") on node \"ci-4330-0-0-n-cc41d9e3f6.novalocal\" DevicePath \"\"" May 10 10:05:12.683148 kubelet[2860]: I0510 10:05:12.683023 2860 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7519c910-415f-4090-b08a-5ae230482243-host-proc-sys-kernel\") on node \"ci-4330-0-0-n-cc41d9e3f6.novalocal\" DevicePath \"\"" May 10 10:05:12.683148 kubelet[2860]: I0510 10:05:12.683034 2860 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-pgg2g\" (UniqueName: \"kubernetes.io/projected/7519c910-415f-4090-b08a-5ae230482243-kube-api-access-pgg2g\") on node \"ci-4330-0-0-n-cc41d9e3f6.novalocal\" DevicePath \"\"" May 10 10:05:12.683148 kubelet[2860]: I0510 10:05:12.683047 2860 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7519c910-415f-4090-b08a-5ae230482243-hostproc\") on node \"ci-4330-0-0-n-cc41d9e3f6.novalocal\" DevicePath \"\"" May 10 10:05:12.683148 kubelet[2860]: I0510 10:05:12.683056 2860 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7519c910-415f-4090-b08a-5ae230482243-bpf-maps\") on node \"ci-4330-0-0-n-cc41d9e3f6.novalocal\" DevicePath \"\"" May 10 10:05:12.683461 kubelet[2860]: I0510 10:05:12.683066 2860 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7519c910-415f-4090-b08a-5ae230482243-hubble-tls\") on node \"ci-4330-0-0-n-cc41d9e3f6.novalocal\" DevicePath \"\"" May 10 10:05:12.683461 kubelet[2860]: I0510 10:05:12.683075 2860 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7519c910-415f-4090-b08a-5ae230482243-cilium-config-path\") on node \"ci-4330-0-0-n-cc41d9e3f6.novalocal\" DevicePath \"\"" May 10 10:05:12.683461 kubelet[2860]: I0510 10:05:12.683087 2860 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7519c910-415f-4090-b08a-5ae230482243-xtables-lock\") on node \"ci-4330-0-0-n-cc41d9e3f6.novalocal\" DevicePath \"\"" May 10 10:05:12.683461 kubelet[2860]: I0510 10:05:12.683097 2860 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7519c910-415f-4090-b08a-5ae230482243-cilium-run\") on node \"ci-4330-0-0-n-cc41d9e3f6.novalocal\" DevicePath \"\"" May 10 10:05:12.683461 kubelet[2860]: I0510 10:05:12.683109 2860 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7519c910-415f-4090-b08a-5ae230482243-cilium-cgroup\") on node \"ci-4330-0-0-n-cc41d9e3f6.novalocal\" DevicePath \"\"" May 10 
10:05:12.683461 kubelet[2860]: I0510 10:05:12.683120 2860 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7519c910-415f-4090-b08a-5ae230482243-etc-cni-netd\") on node \"ci-4330-0-0-n-cc41d9e3f6.novalocal\" DevicePath \"\"" May 10 10:05:12.683461 kubelet[2860]: I0510 10:05:12.683130 2860 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7519c910-415f-4090-b08a-5ae230482243-lib-modules\") on node \"ci-4330-0-0-n-cc41d9e3f6.novalocal\" DevicePath \"\"" May 10 10:05:12.753682 containerd[1491]: time="2025-05-10T10:05:12.753580523Z" level=info msg="RemoveContainer for \"4f3c65f59c97d8d48b0d7aef2564059d24984b0a0bffb04c151f2a4968e85c53\" returns successfully" May 10 10:05:12.755793 kubelet[2860]: I0510 10:05:12.754694 2860 scope.go:117] "RemoveContainer" containerID="4f3c65f59c97d8d48b0d7aef2564059d24984b0a0bffb04c151f2a4968e85c53" May 10 10:05:12.755988 containerd[1491]: time="2025-05-10T10:05:12.755507743Z" level=error msg="ContainerStatus for \"4f3c65f59c97d8d48b0d7aef2564059d24984b0a0bffb04c151f2a4968e85c53\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4f3c65f59c97d8d48b0d7aef2564059d24984b0a0bffb04c151f2a4968e85c53\": not found" May 10 10:05:12.756075 kubelet[2860]: E0510 10:05:12.755919 2860 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4f3c65f59c97d8d48b0d7aef2564059d24984b0a0bffb04c151f2a4968e85c53\": not found" containerID="4f3c65f59c97d8d48b0d7aef2564059d24984b0a0bffb04c151f2a4968e85c53" May 10 10:05:12.756075 kubelet[2860]: I0510 10:05:12.755972 2860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4f3c65f59c97d8d48b0d7aef2564059d24984b0a0bffb04c151f2a4968e85c53"} err="failed to get container status \"4f3c65f59c97d8d48b0d7aef2564059d24984b0a0bffb04c151f2a4968e85c53\": rpc error: code = NotFound desc = an error occurred when try to find container \"4f3c65f59c97d8d48b0d7aef2564059d24984b0a0bffb04c151f2a4968e85c53\": not found" May 10 10:05:12.756075 kubelet[2860]: I0510 10:05:12.756059 2860 scope.go:117] "RemoveContainer" containerID="70428e9f22fc26880eae31ffe1cdc0b0212a78b7468537f0f990f33a4499d56a" May 10 10:05:12.759739 containerd[1491]: time="2025-05-10T10:05:12.758946043Z" level=info msg="RemoveContainer for \"70428e9f22fc26880eae31ffe1cdc0b0212a78b7468537f0f990f33a4499d56a\"" May 10 10:05:12.767185 containerd[1491]: time="2025-05-10T10:05:12.767101530Z" level=info msg="RemoveContainer for \"70428e9f22fc26880eae31ffe1cdc0b0212a78b7468537f0f990f33a4499d56a\" returns successfully" May 10 10:05:12.769065 kubelet[2860]: I0510 10:05:12.767709 2860 scope.go:117] "RemoveContainer" containerID="02f7c0c79c146df408d6e2ef7176b061f198fa7aab61037b008178e29bc49857" May 10 10:05:12.773188 containerd[1491]: time="2025-05-10T10:05:12.771271671Z" level=info msg="RemoveContainer for \"02f7c0c79c146df408d6e2ef7176b061f198fa7aab61037b008178e29bc49857\"" May 10 10:05:12.779522 containerd[1491]: time="2025-05-10T10:05:12.779424252Z" level=info msg="RemoveContainer for \"02f7c0c79c146df408d6e2ef7176b061f198fa7aab61037b008178e29bc49857\" returns successfully" May 10 10:05:12.779867 kubelet[2860]: I0510 10:05:12.779671 2860 scope.go:117] "RemoveContainer" containerID="bdad80e925ea5e54c1b893bd566edfd9e06af99a02c94f53f5f4ab6d927f4098" May 10 10:05:12.783238 containerd[1491]: 
time="2025-05-10T10:05:12.783060925Z" level=info msg="RemoveContainer for \"bdad80e925ea5e54c1b893bd566edfd9e06af99a02c94f53f5f4ab6d927f4098\"" May 10 10:05:12.813126 containerd[1491]: time="2025-05-10T10:05:12.812878047Z" level=info msg="RemoveContainer for \"bdad80e925ea5e54c1b893bd566edfd9e06af99a02c94f53f5f4ab6d927f4098\" returns successfully" May 10 10:05:12.815204 kubelet[2860]: I0510 10:05:12.815162 2860 scope.go:117] "RemoveContainer" containerID="3c30e87913d7636b44bffa4ec1ffa8e40c99b7afcedaabbe507be7547e9c13e6" May 10 10:05:12.821432 containerd[1491]: time="2025-05-10T10:05:12.821238687Z" level=info msg="RemoveContainer for \"3c30e87913d7636b44bffa4ec1ffa8e40c99b7afcedaabbe507be7547e9c13e6\"" May 10 10:05:12.830099 containerd[1491]: time="2025-05-10T10:05:12.829990511Z" level=info msg="RemoveContainer for \"3c30e87913d7636b44bffa4ec1ffa8e40c99b7afcedaabbe507be7547e9c13e6\" returns successfully" May 10 10:05:12.831014 kubelet[2860]: I0510 10:05:12.830580 2860 scope.go:117] "RemoveContainer" containerID="b38f1e8da1de9e56e6519ae708b769fd2ed9961296f59f7146cb0e1922612f55" May 10 10:05:12.835451 containerd[1491]: time="2025-05-10T10:05:12.834271259Z" level=info msg="RemoveContainer for \"b38f1e8da1de9e56e6519ae708b769fd2ed9961296f59f7146cb0e1922612f55\"" May 10 10:05:12.842053 containerd[1491]: time="2025-05-10T10:05:12.841988204Z" level=info msg="RemoveContainer for \"b38f1e8da1de9e56e6519ae708b769fd2ed9961296f59f7146cb0e1922612f55\" returns successfully" May 10 10:05:12.842549 kubelet[2860]: I0510 10:05:12.842319 2860 scope.go:117] "RemoveContainer" containerID="70428e9f22fc26880eae31ffe1cdc0b0212a78b7468537f0f990f33a4499d56a" May 10 10:05:12.843478 containerd[1491]: time="2025-05-10T10:05:12.843290504Z" level=error msg="ContainerStatus for \"70428e9f22fc26880eae31ffe1cdc0b0212a78b7468537f0f990f33a4499d56a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"70428e9f22fc26880eae31ffe1cdc0b0212a78b7468537f0f990f33a4499d56a\": not found" May 10 10:05:12.844136 kubelet[2860]: E0510 10:05:12.843772 2860 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"70428e9f22fc26880eae31ffe1cdc0b0212a78b7468537f0f990f33a4499d56a\": not found" containerID="70428e9f22fc26880eae31ffe1cdc0b0212a78b7468537f0f990f33a4499d56a" May 10 10:05:12.844136 kubelet[2860]: I0510 10:05:12.843860 2860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"70428e9f22fc26880eae31ffe1cdc0b0212a78b7468537f0f990f33a4499d56a"} err="failed to get container status \"70428e9f22fc26880eae31ffe1cdc0b0212a78b7468537f0f990f33a4499d56a\": rpc error: code = NotFound desc = an error occurred when try to find container \"70428e9f22fc26880eae31ffe1cdc0b0212a78b7468537f0f990f33a4499d56a\": not found" May 10 10:05:12.844136 kubelet[2860]: I0510 10:05:12.843921 2860 scope.go:117] "RemoveContainer" containerID="02f7c0c79c146df408d6e2ef7176b061f198fa7aab61037b008178e29bc49857" May 10 10:05:12.845302 containerd[1491]: time="2025-05-10T10:05:12.844798870Z" level=error msg="ContainerStatus for \"02f7c0c79c146df408d6e2ef7176b061f198fa7aab61037b008178e29bc49857\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"02f7c0c79c146df408d6e2ef7176b061f198fa7aab61037b008178e29bc49857\": not found" May 10 10:05:12.845819 kubelet[2860]: E0510 10:05:12.845554 2860 remote_runtime.go:432] "ContainerStatus from runtime 
service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"02f7c0c79c146df408d6e2ef7176b061f198fa7aab61037b008178e29bc49857\": not found" containerID="02f7c0c79c146df408d6e2ef7176b061f198fa7aab61037b008178e29bc49857" May 10 10:05:12.846175 kubelet[2860]: I0510 10:05:12.845653 2860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"02f7c0c79c146df408d6e2ef7176b061f198fa7aab61037b008178e29bc49857"} err="failed to get container status \"02f7c0c79c146df408d6e2ef7176b061f198fa7aab61037b008178e29bc49857\": rpc error: code = NotFound desc = an error occurred when try to find container \"02f7c0c79c146df408d6e2ef7176b061f198fa7aab61037b008178e29bc49857\": not found" May 10 10:05:12.846175 kubelet[2860]: I0510 10:05:12.846008 2860 scope.go:117] "RemoveContainer" containerID="bdad80e925ea5e54c1b893bd566edfd9e06af99a02c94f53f5f4ab6d927f4098" May 10 10:05:12.846785 containerd[1491]: time="2025-05-10T10:05:12.846699399Z" level=error msg="ContainerStatus for \"bdad80e925ea5e54c1b893bd566edfd9e06af99a02c94f53f5f4ab6d927f4098\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bdad80e925ea5e54c1b893bd566edfd9e06af99a02c94f53f5f4ab6d927f4098\": not found" May 10 10:05:12.847758 kubelet[2860]: E0510 10:05:12.847424 2860 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"bdad80e925ea5e54c1b893bd566edfd9e06af99a02c94f53f5f4ab6d927f4098\": not found" containerID="bdad80e925ea5e54c1b893bd566edfd9e06af99a02c94f53f5f4ab6d927f4098" May 10 10:05:12.847758 kubelet[2860]: I0510 10:05:12.847502 2860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"bdad80e925ea5e54c1b893bd566edfd9e06af99a02c94f53f5f4ab6d927f4098"} err="failed to get container status \"bdad80e925ea5e54c1b893bd566edfd9e06af99a02c94f53f5f4ab6d927f4098\": rpc error: code = NotFound desc = an error occurred when try to find container \"bdad80e925ea5e54c1b893bd566edfd9e06af99a02c94f53f5f4ab6d927f4098\": not found" May 10 10:05:12.847758 kubelet[2860]: I0510 10:05:12.847550 2860 scope.go:117] "RemoveContainer" containerID="3c30e87913d7636b44bffa4ec1ffa8e40c99b7afcedaabbe507be7547e9c13e6" May 10 10:05:12.848059 containerd[1491]: time="2025-05-10T10:05:12.847890069Z" level=error msg="ContainerStatus for \"3c30e87913d7636b44bffa4ec1ffa8e40c99b7afcedaabbe507be7547e9c13e6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3c30e87913d7636b44bffa4ec1ffa8e40c99b7afcedaabbe507be7547e9c13e6\": not found" May 10 10:05:12.848666 kubelet[2860]: E0510 10:05:12.848602 2860 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3c30e87913d7636b44bffa4ec1ffa8e40c99b7afcedaabbe507be7547e9c13e6\": not found" containerID="3c30e87913d7636b44bffa4ec1ffa8e40c99b7afcedaabbe507be7547e9c13e6" May 10 10:05:12.849736 kubelet[2860]: I0510 10:05:12.848685 2860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3c30e87913d7636b44bffa4ec1ffa8e40c99b7afcedaabbe507be7547e9c13e6"} err="failed to get container status \"3c30e87913d7636b44bffa4ec1ffa8e40c99b7afcedaabbe507be7547e9c13e6\": rpc error: code = NotFound desc = an error occurred when try to find container 
\"3c30e87913d7636b44bffa4ec1ffa8e40c99b7afcedaabbe507be7547e9c13e6\": not found" May 10 10:05:12.849736 kubelet[2860]: I0510 10:05:12.848737 2860 scope.go:117] "RemoveContainer" containerID="b38f1e8da1de9e56e6519ae708b769fd2ed9961296f59f7146cb0e1922612f55" May 10 10:05:12.849736 kubelet[2860]: E0510 10:05:12.849469 2860 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b38f1e8da1de9e56e6519ae708b769fd2ed9961296f59f7146cb0e1922612f55\": not found" containerID="b38f1e8da1de9e56e6519ae708b769fd2ed9961296f59f7146cb0e1922612f55" May 10 10:05:12.849736 kubelet[2860]: I0510 10:05:12.849523 2860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b38f1e8da1de9e56e6519ae708b769fd2ed9961296f59f7146cb0e1922612f55"} err="failed to get container status \"b38f1e8da1de9e56e6519ae708b769fd2ed9961296f59f7146cb0e1922612f55\": rpc error: code = NotFound desc = an error occurred when try to find container \"b38f1e8da1de9e56e6519ae708b769fd2ed9961296f59f7146cb0e1922612f55\": not found" May 10 10:05:12.851979 containerd[1491]: time="2025-05-10T10:05:12.849157964Z" level=error msg="ContainerStatus for \"b38f1e8da1de9e56e6519ae708b769fd2ed9961296f59f7146cb0e1922612f55\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b38f1e8da1de9e56e6519ae708b769fd2ed9961296f59f7146cb0e1922612f55\": not found" May 10 10:05:13.216866 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b675babd66b94d325da2ecb4e6de5918c4b0fdf0d4c875f91b38353642e141c2-shm.mount: Deactivated successfully. May 10 10:05:13.217166 systemd[1]: var-lib-kubelet-pods-418f35a5\x2dcb54\x2d4456\x2d8d05\x2d6f4a23bbb187-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dlwh8d.mount: Deactivated successfully. May 10 10:05:13.217460 systemd[1]: var-lib-kubelet-pods-7519c910\x2d415f\x2d4090\x2db08a\x2d5ae230482243-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dpgg2g.mount: Deactivated successfully. May 10 10:05:13.217883 systemd[1]: var-lib-kubelet-pods-7519c910\x2d415f\x2d4090\x2db08a\x2d5ae230482243-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. May 10 10:05:13.218093 systemd[1]: var-lib-kubelet-pods-7519c910\x2d415f\x2d4090\x2db08a\x2d5ae230482243-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 10 10:05:13.385018 kubelet[2860]: I0510 10:05:13.384926 2860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="418f35a5-cb54-4456-8d05-6f4a23bbb187" path="/var/lib/kubelet/pods/418f35a5-cb54-4456-8d05-6f4a23bbb187/volumes" May 10 10:05:13.386283 kubelet[2860]: I0510 10:05:13.386207 2860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7519c910-415f-4090-b08a-5ae230482243" path="/var/lib/kubelet/pods/7519c910-415f-4090-b08a-5ae230482243/volumes" May 10 10:05:14.253246 sshd[4407]: Connection closed by 172.24.4.1 port 43604 May 10 10:05:14.256036 sshd-session[4404]: pam_unix(sshd:session): session closed for user core May 10 10:05:14.276302 systemd[1]: sshd@24-172.24.4.22:22-172.24.4.1:43604.service: Deactivated successfully. May 10 10:05:14.282247 systemd[1]: session-27.scope: Deactivated successfully. May 10 10:05:14.283259 systemd[1]: session-27.scope: Consumed 1.384s CPU time, 25.2M memory peak. May 10 10:05:14.287594 systemd-logind[1468]: Session 27 logged out. Waiting for processes to exit. 
May 10 10:05:14.294230 systemd[1]: Started sshd@25-172.24.4.22:22-172.24.4.1:37404.service - OpenSSH per-connection server daemon (172.24.4.1:37404). May 10 10:05:14.299071 systemd-logind[1468]: Removed session 27. May 10 10:05:15.713448 sshd[4555]: Accepted publickey for core from 172.24.4.1 port 37404 ssh2: RSA SHA256:s763iqE5ZQO2n9I9yHPInO5+M518XrNVWKB/LWGB6zk May 10 10:05:15.716663 sshd-session[4555]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 10:05:15.736803 systemd-logind[1468]: New session 28 of user core. May 10 10:05:15.742744 systemd[1]: Started session-28.scope - Session 28 of User core. May 10 10:05:16.568776 kubelet[2860]: E0510 10:05:16.568691 2860 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 10 10:05:17.119764 kubelet[2860]: I0510 10:05:17.119410 2860 topology_manager.go:215] "Topology Admit Handler" podUID="66289a83-7c18-4f27-b8a0-7c660226cd13" podNamespace="kube-system" podName="cilium-2mg26" May 10 10:05:17.119939 kubelet[2860]: E0510 10:05:17.119818 2860 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7519c910-415f-4090-b08a-5ae230482243" containerName="apply-sysctl-overwrites" May 10 10:05:17.119939 kubelet[2860]: E0510 10:05:17.119864 2860 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7519c910-415f-4090-b08a-5ae230482243" containerName="mount-bpf-fs" May 10 10:05:17.119939 kubelet[2860]: E0510 10:05:17.119880 2860 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="418f35a5-cb54-4456-8d05-6f4a23bbb187" containerName="cilium-operator" May 10 10:05:17.119939 kubelet[2860]: E0510 10:05:17.119893 2860 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7519c910-415f-4090-b08a-5ae230482243" containerName="mount-cgroup" May 10 10:05:17.119939 kubelet[2860]: E0510 10:05:17.119903 2860 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7519c910-415f-4090-b08a-5ae230482243" containerName="clean-cilium-state" May 10 10:05:17.119939 kubelet[2860]: E0510 10:05:17.119913 2860 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7519c910-415f-4090-b08a-5ae230482243" containerName="cilium-agent" May 10 10:05:17.120424 kubelet[2860]: I0510 10:05:17.120388 2860 memory_manager.go:354] "RemoveStaleState removing state" podUID="7519c910-415f-4090-b08a-5ae230482243" containerName="cilium-agent" May 10 10:05:17.120424 kubelet[2860]: I0510 10:05:17.120416 2860 memory_manager.go:354] "RemoveStaleState removing state" podUID="418f35a5-cb54-4456-8d05-6f4a23bbb187" containerName="cilium-operator" May 10 10:05:17.132994 systemd[1]: Created slice kubepods-burstable-pod66289a83_7c18_4f27_b8a0_7c660226cd13.slice - libcontainer container kubepods-burstable-pod66289a83_7c18_4f27_b8a0_7c660226cd13.slice. 
May 10 10:05:17.171473 kubelet[2860]: I0510 10:05:17.170648 2860 setters.go:580] "Node became not ready" node="ci-4330-0-0-n-cc41d9e3f6.novalocal" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-10T10:05:17Z","lastTransitionTime":"2025-05-10T10:05:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} May 10 10:05:17.227338 kubelet[2860]: I0510 10:05:17.227274 2860 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/66289a83-7c18-4f27-b8a0-7c660226cd13-cni-path\") pod \"cilium-2mg26\" (UID: \"66289a83-7c18-4f27-b8a0-7c660226cd13\") " pod="kube-system/cilium-2mg26" May 10 10:05:17.227338 kubelet[2860]: I0510 10:05:17.227327 2860 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/66289a83-7c18-4f27-b8a0-7c660226cd13-etc-cni-netd\") pod \"cilium-2mg26\" (UID: \"66289a83-7c18-4f27-b8a0-7c660226cd13\") " pod="kube-system/cilium-2mg26" May 10 10:05:17.227338 kubelet[2860]: I0510 10:05:17.227349 2860 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/66289a83-7c18-4f27-b8a0-7c660226cd13-hostproc\") pod \"cilium-2mg26\" (UID: \"66289a83-7c18-4f27-b8a0-7c660226cd13\") " pod="kube-system/cilium-2mg26" May 10 10:05:17.227585 kubelet[2860]: I0510 10:05:17.227400 2860 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/66289a83-7c18-4f27-b8a0-7c660226cd13-cilium-cgroup\") pod \"cilium-2mg26\" (UID: \"66289a83-7c18-4f27-b8a0-7c660226cd13\") " pod="kube-system/cilium-2mg26" May 10 10:05:17.227585 kubelet[2860]: I0510 10:05:17.227424 2860 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/66289a83-7c18-4f27-b8a0-7c660226cd13-bpf-maps\") pod \"cilium-2mg26\" (UID: \"66289a83-7c18-4f27-b8a0-7c660226cd13\") " pod="kube-system/cilium-2mg26" May 10 10:05:17.227585 kubelet[2860]: I0510 10:05:17.227454 2860 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/66289a83-7c18-4f27-b8a0-7c660226cd13-lib-modules\") pod \"cilium-2mg26\" (UID: \"66289a83-7c18-4f27-b8a0-7c660226cd13\") " pod="kube-system/cilium-2mg26" May 10 10:05:17.227585 kubelet[2860]: I0510 10:05:17.227478 2860 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/66289a83-7c18-4f27-b8a0-7c660226cd13-host-proc-sys-net\") pod \"cilium-2mg26\" (UID: \"66289a83-7c18-4f27-b8a0-7c660226cd13\") " pod="kube-system/cilium-2mg26" May 10 10:05:17.227585 kubelet[2860]: I0510 10:05:17.227506 2860 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5j4xm\" (UniqueName: \"kubernetes.io/projected/66289a83-7c18-4f27-b8a0-7c660226cd13-kube-api-access-5j4xm\") pod \"cilium-2mg26\" (UID: \"66289a83-7c18-4f27-b8a0-7c660226cd13\") " pod="kube-system/cilium-2mg26" May 10 10:05:17.227585 kubelet[2860]: I0510 10:05:17.227533 2860 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/66289a83-7c18-4f27-b8a0-7c660226cd13-host-proc-sys-kernel\") pod \"cilium-2mg26\" (UID: \"66289a83-7c18-4f27-b8a0-7c660226cd13\") " pod="kube-system/cilium-2mg26" May 10 10:05:17.227812 kubelet[2860]: I0510 10:05:17.227553 2860 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/66289a83-7c18-4f27-b8a0-7c660226cd13-cilium-config-path\") pod \"cilium-2mg26\" (UID: \"66289a83-7c18-4f27-b8a0-7c660226cd13\") " pod="kube-system/cilium-2mg26" May 10 10:05:17.227812 kubelet[2860]: I0510 10:05:17.227578 2860 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/66289a83-7c18-4f27-b8a0-7c660226cd13-hubble-tls\") pod \"cilium-2mg26\" (UID: \"66289a83-7c18-4f27-b8a0-7c660226cd13\") " pod="kube-system/cilium-2mg26" May 10 10:05:17.227812 kubelet[2860]: I0510 10:05:17.227648 2860 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/66289a83-7c18-4f27-b8a0-7c660226cd13-cilium-ipsec-secrets\") pod \"cilium-2mg26\" (UID: \"66289a83-7c18-4f27-b8a0-7c660226cd13\") " pod="kube-system/cilium-2mg26" May 10 10:05:17.227812 kubelet[2860]: I0510 10:05:17.227681 2860 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/66289a83-7c18-4f27-b8a0-7c660226cd13-xtables-lock\") pod \"cilium-2mg26\" (UID: \"66289a83-7c18-4f27-b8a0-7c660226cd13\") " pod="kube-system/cilium-2mg26" May 10 10:05:17.227812 kubelet[2860]: I0510 10:05:17.227747 2860 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/66289a83-7c18-4f27-b8a0-7c660226cd13-clustermesh-secrets\") pod \"cilium-2mg26\" (UID: \"66289a83-7c18-4f27-b8a0-7c660226cd13\") " pod="kube-system/cilium-2mg26" May 10 10:05:17.227812 kubelet[2860]: I0510 10:05:17.227782 2860 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/66289a83-7c18-4f27-b8a0-7c660226cd13-cilium-run\") pod \"cilium-2mg26\" (UID: \"66289a83-7c18-4f27-b8a0-7c660226cd13\") " pod="kube-system/cilium-2mg26" May 10 10:05:17.319554 sshd[4558]: Connection closed by 172.24.4.1 port 37404 May 10 10:05:17.319226 sshd-session[4555]: pam_unix(sshd:session): session closed for user core May 10 10:05:17.338345 systemd[1]: sshd@25-172.24.4.22:22-172.24.4.1:37404.service: Deactivated successfully. May 10 10:05:17.352756 systemd[1]: session-28.scope: Deactivated successfully. May 10 10:05:17.404490 systemd-logind[1468]: Session 28 logged out. Waiting for processes to exit. May 10 10:05:17.413693 systemd[1]: Started sshd@26-172.24.4.22:22-172.24.4.1:37406.service - OpenSSH per-connection server daemon (172.24.4.1:37406). May 10 10:05:17.417682 systemd-logind[1468]: Removed session 28. 
May 10 10:05:17.442531 containerd[1491]: time="2025-05-10T10:05:17.442478324Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2mg26,Uid:66289a83-7c18-4f27-b8a0-7c660226cd13,Namespace:kube-system,Attempt:0,}" May 10 10:05:17.469579 containerd[1491]: time="2025-05-10T10:05:17.469437316Z" level=info msg="connecting to shim 39d349bc0077a6b4c99de3dfef96cea51f383066dcf5d25e1363b247604340a6" address="unix:///run/containerd/s/b2bc610a13c7ffd6c5181934c5d4328add30311d90cf47bc5fcf8b6a89767f1e" namespace=k8s.io protocol=ttrpc version=3 May 10 10:05:17.500538 systemd[1]: Started cri-containerd-39d349bc0077a6b4c99de3dfef96cea51f383066dcf5d25e1363b247604340a6.scope - libcontainer container 39d349bc0077a6b4c99de3dfef96cea51f383066dcf5d25e1363b247604340a6. May 10 10:05:17.533752 containerd[1491]: time="2025-05-10T10:05:17.533662319Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2mg26,Uid:66289a83-7c18-4f27-b8a0-7c660226cd13,Namespace:kube-system,Attempt:0,} returns sandbox id \"39d349bc0077a6b4c99de3dfef96cea51f383066dcf5d25e1363b247604340a6\"" May 10 10:05:17.540824 containerd[1491]: time="2025-05-10T10:05:17.539419784Z" level=info msg="CreateContainer within sandbox \"39d349bc0077a6b4c99de3dfef96cea51f383066dcf5d25e1363b247604340a6\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 10 10:05:17.548232 containerd[1491]: time="2025-05-10T10:05:17.548174795Z" level=info msg="Container bdc5b6a99c7b78aba736fa27a13ff9da8625041c31df27c81cd9ab8b645f2920: CDI devices from CRI Config.CDIDevices: []" May 10 10:05:17.558952 containerd[1491]: time="2025-05-10T10:05:17.558903863Z" level=info msg="CreateContainer within sandbox \"39d349bc0077a6b4c99de3dfef96cea51f383066dcf5d25e1363b247604340a6\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"bdc5b6a99c7b78aba736fa27a13ff9da8625041c31df27c81cd9ab8b645f2920\"" May 10 10:05:17.559748 containerd[1491]: time="2025-05-10T10:05:17.559724180Z" level=info msg="StartContainer for \"bdc5b6a99c7b78aba736fa27a13ff9da8625041c31df27c81cd9ab8b645f2920\"" May 10 10:05:17.561169 containerd[1491]: time="2025-05-10T10:05:17.561141004Z" level=info msg="connecting to shim bdc5b6a99c7b78aba736fa27a13ff9da8625041c31df27c81cd9ab8b645f2920" address="unix:///run/containerd/s/b2bc610a13c7ffd6c5181934c5d4328add30311d90cf47bc5fcf8b6a89767f1e" protocol=ttrpc version=3 May 10 10:05:17.580572 systemd[1]: Started cri-containerd-bdc5b6a99c7b78aba736fa27a13ff9da8625041c31df27c81cd9ab8b645f2920.scope - libcontainer container bdc5b6a99c7b78aba736fa27a13ff9da8625041c31df27c81cd9ab8b645f2920. May 10 10:05:17.625805 containerd[1491]: time="2025-05-10T10:05:17.625734387Z" level=info msg="StartContainer for \"bdc5b6a99c7b78aba736fa27a13ff9da8625041c31df27c81cd9ab8b645f2920\" returns successfully" May 10 10:05:17.640194 systemd[1]: cri-containerd-bdc5b6a99c7b78aba736fa27a13ff9da8625041c31df27c81cd9ab8b645f2920.scope: Deactivated successfully. 
May 10 10:05:17.644482 containerd[1491]: time="2025-05-10T10:05:17.644272384Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bdc5b6a99c7b78aba736fa27a13ff9da8625041c31df27c81cd9ab8b645f2920\" id:\"bdc5b6a99c7b78aba736fa27a13ff9da8625041c31df27c81cd9ab8b645f2920\" pid:4630 exited_at:{seconds:1746871517 nanos:643565591}" May 10 10:05:17.644482 containerd[1491]: time="2025-05-10T10:05:17.644278516Z" level=info msg="received exit event container_id:\"bdc5b6a99c7b78aba736fa27a13ff9da8625041c31df27c81cd9ab8b645f2920\" id:\"bdc5b6a99c7b78aba736fa27a13ff9da8625041c31df27c81cd9ab8b645f2920\" pid:4630 exited_at:{seconds:1746871517 nanos:643565591}" May 10 10:05:18.541926 sshd[4571]: Accepted publickey for core from 172.24.4.1 port 37406 ssh2: RSA SHA256:s763iqE5ZQO2n9I9yHPInO5+M518XrNVWKB/LWGB6zk May 10 10:05:18.545868 sshd-session[4571]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 10:05:18.565784 systemd-logind[1468]: New session 29 of user core. May 10 10:05:18.575783 systemd[1]: Started session-29.scope - Session 29 of User core. May 10 10:05:18.703776 containerd[1491]: time="2025-05-10T10:05:18.703669302Z" level=info msg="CreateContainer within sandbox \"39d349bc0077a6b4c99de3dfef96cea51f383066dcf5d25e1363b247604340a6\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 10 10:05:18.727422 containerd[1491]: time="2025-05-10T10:05:18.727178563Z" level=info msg="Container e20547167feea5016837154c10710a9d576de0fc593e5663d3eee4b85226caa0: CDI devices from CRI Config.CDIDevices: []" May 10 10:05:18.763515 containerd[1491]: time="2025-05-10T10:05:18.762612753Z" level=info msg="CreateContainer within sandbox \"39d349bc0077a6b4c99de3dfef96cea51f383066dcf5d25e1363b247604340a6\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"e20547167feea5016837154c10710a9d576de0fc593e5663d3eee4b85226caa0\"" May 10 10:05:18.764561 containerd[1491]: time="2025-05-10T10:05:18.764304312Z" level=info msg="StartContainer for \"e20547167feea5016837154c10710a9d576de0fc593e5663d3eee4b85226caa0\"" May 10 10:05:18.766333 containerd[1491]: time="2025-05-10T10:05:18.766237864Z" level=info msg="connecting to shim e20547167feea5016837154c10710a9d576de0fc593e5663d3eee4b85226caa0" address="unix:///run/containerd/s/b2bc610a13c7ffd6c5181934c5d4328add30311d90cf47bc5fcf8b6a89767f1e" protocol=ttrpc version=3 May 10 10:05:18.799501 systemd[1]: Started cri-containerd-e20547167feea5016837154c10710a9d576de0fc593e5663d3eee4b85226caa0.scope - libcontainer container e20547167feea5016837154c10710a9d576de0fc593e5663d3eee4b85226caa0. May 10 10:05:18.841484 containerd[1491]: time="2025-05-10T10:05:18.841440964Z" level=info msg="StartContainer for \"e20547167feea5016837154c10710a9d576de0fc593e5663d3eee4b85226caa0\" returns successfully" May 10 10:05:18.851210 systemd[1]: cri-containerd-e20547167feea5016837154c10710a9d576de0fc593e5663d3eee4b85226caa0.scope: Deactivated successfully. 
May 10 10:05:18.851577 containerd[1491]: time="2025-05-10T10:05:18.851457708Z" level=info msg="received exit event container_id:\"e20547167feea5016837154c10710a9d576de0fc593e5663d3eee4b85226caa0\" id:\"e20547167feea5016837154c10710a9d576de0fc593e5663d3eee4b85226caa0\" pid:4675 exited_at:{seconds:1746871518 nanos:851173516}" May 10 10:05:18.852147 containerd[1491]: time="2025-05-10T10:05:18.851990747Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e20547167feea5016837154c10710a9d576de0fc593e5663d3eee4b85226caa0\" id:\"e20547167feea5016837154c10710a9d576de0fc593e5663d3eee4b85226caa0\" pid:4675 exited_at:{seconds:1746871518 nanos:851173516}" May 10 10:05:18.875905 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e20547167feea5016837154c10710a9d576de0fc593e5663d3eee4b85226caa0-rootfs.mount: Deactivated successfully. May 10 10:05:19.164552 sshd[4662]: Connection closed by 172.24.4.1 port 37406 May 10 10:05:19.166240 sshd-session[4571]: pam_unix(sshd:session): session closed for user core May 10 10:05:19.186553 systemd[1]: sshd@26-172.24.4.22:22-172.24.4.1:37406.service: Deactivated successfully. May 10 10:05:19.191016 systemd[1]: session-29.scope: Deactivated successfully. May 10 10:05:19.198972 systemd-logind[1468]: Session 29 logged out. Waiting for processes to exit. May 10 10:05:19.202254 systemd[1]: Started sshd@27-172.24.4.22:22-172.24.4.1:37416.service - OpenSSH per-connection server daemon (172.24.4.1:37416). May 10 10:05:19.208343 systemd-logind[1468]: Removed session 29. May 10 10:05:19.713164 containerd[1491]: time="2025-05-10T10:05:19.712948133Z" level=info msg="CreateContainer within sandbox \"39d349bc0077a6b4c99de3dfef96cea51f383066dcf5d25e1363b247604340a6\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 10 10:05:19.750853 containerd[1491]: time="2025-05-10T10:05:19.750773763Z" level=info msg="Container f71466981793155256105a4a5ca01c0392914f83227aa40ba3534a7379fcd8f5: CDI devices from CRI Config.CDIDevices: []" May 10 10:05:19.783738 containerd[1491]: time="2025-05-10T10:05:19.782685856Z" level=info msg="CreateContainer within sandbox \"39d349bc0077a6b4c99de3dfef96cea51f383066dcf5d25e1363b247604340a6\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"f71466981793155256105a4a5ca01c0392914f83227aa40ba3534a7379fcd8f5\"" May 10 10:05:19.784683 containerd[1491]: time="2025-05-10T10:05:19.784503701Z" level=info msg="StartContainer for \"f71466981793155256105a4a5ca01c0392914f83227aa40ba3534a7379fcd8f5\"" May 10 10:05:19.796809 containerd[1491]: time="2025-05-10T10:05:19.796629237Z" level=info msg="connecting to shim f71466981793155256105a4a5ca01c0392914f83227aa40ba3534a7379fcd8f5" address="unix:///run/containerd/s/b2bc610a13c7ffd6c5181934c5d4328add30311d90cf47bc5fcf8b6a89767f1e" protocol=ttrpc version=3 May 10 10:05:19.833057 systemd[1]: Started cri-containerd-f71466981793155256105a4a5ca01c0392914f83227aa40ba3534a7379fcd8f5.scope - libcontainer container f71466981793155256105a4a5ca01c0392914f83227aa40ba3534a7379fcd8f5. May 10 10:05:19.882823 systemd[1]: cri-containerd-f71466981793155256105a4a5ca01c0392914f83227aa40ba3534a7379fcd8f5.scope: Deactivated successfully. 
May 10 10:05:19.886945 containerd[1491]: time="2025-05-10T10:05:19.886901709Z" level=info msg="received exit event container_id:\"f71466981793155256105a4a5ca01c0392914f83227aa40ba3534a7379fcd8f5\" id:\"f71466981793155256105a4a5ca01c0392914f83227aa40ba3534a7379fcd8f5\" pid:4728 exited_at:{seconds:1746871519 nanos:886682578}" May 10 10:05:19.888485 containerd[1491]: time="2025-05-10T10:05:19.887394452Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f71466981793155256105a4a5ca01c0392914f83227aa40ba3534a7379fcd8f5\" id:\"f71466981793155256105a4a5ca01c0392914f83227aa40ba3534a7379fcd8f5\" pid:4728 exited_at:{seconds:1746871519 nanos:886682578}" May 10 10:05:19.888485 containerd[1491]: time="2025-05-10T10:05:19.887595810Z" level=info msg="StartContainer for \"f71466981793155256105a4a5ca01c0392914f83227aa40ba3534a7379fcd8f5\" returns successfully" May 10 10:05:19.914961 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f71466981793155256105a4a5ca01c0392914f83227aa40ba3534a7379fcd8f5-rootfs.mount: Deactivated successfully. May 10 10:05:20.509197 sshd[4711]: Accepted publickey for core from 172.24.4.1 port 37416 ssh2: RSA SHA256:s763iqE5ZQO2n9I9yHPInO5+M518XrNVWKB/LWGB6zk May 10 10:05:20.511518 sshd-session[4711]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 10:05:20.520350 systemd-logind[1468]: New session 30 of user core. May 10 10:05:20.528638 systemd[1]: Started session-30.scope - Session 30 of User core. May 10 10:05:20.730586 containerd[1491]: time="2025-05-10T10:05:20.729221033Z" level=info msg="CreateContainer within sandbox \"39d349bc0077a6b4c99de3dfef96cea51f383066dcf5d25e1363b247604340a6\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 10 10:05:20.752944 containerd[1491]: time="2025-05-10T10:05:20.752830643Z" level=info msg="Container f29243c90ffc3facc8a7a2ace3b10b94b28e11373889ad98d22227d1f1145a97: CDI devices from CRI Config.CDIDevices: []" May 10 10:05:20.786206 containerd[1491]: time="2025-05-10T10:05:20.785137337Z" level=info msg="CreateContainer within sandbox \"39d349bc0077a6b4c99de3dfef96cea51f383066dcf5d25e1363b247604340a6\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"f29243c90ffc3facc8a7a2ace3b10b94b28e11373889ad98d22227d1f1145a97\"" May 10 10:05:20.790730 containerd[1491]: time="2025-05-10T10:05:20.790636869Z" level=info msg="StartContainer for \"f29243c90ffc3facc8a7a2ace3b10b94b28e11373889ad98d22227d1f1145a97\"" May 10 10:05:20.798169 containerd[1491]: time="2025-05-10T10:05:20.798101042Z" level=info msg="connecting to shim f29243c90ffc3facc8a7a2ace3b10b94b28e11373889ad98d22227d1f1145a97" address="unix:///run/containerd/s/b2bc610a13c7ffd6c5181934c5d4328add30311d90cf47bc5fcf8b6a89767f1e" protocol=ttrpc version=3 May 10 10:05:20.833557 systemd[1]: Started cri-containerd-f29243c90ffc3facc8a7a2ace3b10b94b28e11373889ad98d22227d1f1145a97.scope - libcontainer container f29243c90ffc3facc8a7a2ace3b10b94b28e11373889ad98d22227d1f1145a97. May 10 10:05:20.864733 systemd[1]: cri-containerd-f29243c90ffc3facc8a7a2ace3b10b94b28e11373889ad98d22227d1f1145a97.scope: Deactivated successfully. 
May 10 10:05:20.866740 containerd[1491]: time="2025-05-10T10:05:20.866621998Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f29243c90ffc3facc8a7a2ace3b10b94b28e11373889ad98d22227d1f1145a97\" id:\"f29243c90ffc3facc8a7a2ace3b10b94b28e11373889ad98d22227d1f1145a97\" pid:4768 exited_at:{seconds:1746871520 nanos:864979461}" May 10 10:05:20.871862 containerd[1491]: time="2025-05-10T10:05:20.871742521Z" level=info msg="received exit event container_id:\"f29243c90ffc3facc8a7a2ace3b10b94b28e11373889ad98d22227d1f1145a97\" id:\"f29243c90ffc3facc8a7a2ace3b10b94b28e11373889ad98d22227d1f1145a97\" pid:4768 exited_at:{seconds:1746871520 nanos:864979461}" May 10 10:05:20.882095 containerd[1491]: time="2025-05-10T10:05:20.882004225Z" level=info msg="StartContainer for \"f29243c90ffc3facc8a7a2ace3b10b94b28e11373889ad98d22227d1f1145a97\" returns successfully" May 10 10:05:20.896153 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f29243c90ffc3facc8a7a2ace3b10b94b28e11373889ad98d22227d1f1145a97-rootfs.mount: Deactivated successfully. May 10 10:05:21.570539 kubelet[2860]: E0510 10:05:21.570348 2860 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 10 10:05:21.748461 containerd[1491]: time="2025-05-10T10:05:21.746719618Z" level=info msg="CreateContainer within sandbox \"39d349bc0077a6b4c99de3dfef96cea51f383066dcf5d25e1363b247604340a6\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 10 10:05:21.777432 containerd[1491]: time="2025-05-10T10:05:21.775745184Z" level=info msg="Container 396deaf975e7d2005abf49804d908b2725e7d9730ae57245ce321aab2feeaa0c: CDI devices from CRI Config.CDIDevices: []" May 10 10:05:21.814712 containerd[1491]: time="2025-05-10T10:05:21.814596830Z" level=info msg="CreateContainer within sandbox \"39d349bc0077a6b4c99de3dfef96cea51f383066dcf5d25e1363b247604340a6\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"396deaf975e7d2005abf49804d908b2725e7d9730ae57245ce321aab2feeaa0c\"" May 10 10:05:21.818771 containerd[1491]: time="2025-05-10T10:05:21.818712409Z" level=info msg="StartContainer for \"396deaf975e7d2005abf49804d908b2725e7d9730ae57245ce321aab2feeaa0c\"" May 10 10:05:21.823734 containerd[1491]: time="2025-05-10T10:05:21.823518423Z" level=info msg="connecting to shim 396deaf975e7d2005abf49804d908b2725e7d9730ae57245ce321aab2feeaa0c" address="unix:///run/containerd/s/b2bc610a13c7ffd6c5181934c5d4328add30311d90cf47bc5fcf8b6a89767f1e" protocol=ttrpc version=3 May 10 10:05:21.854586 systemd[1]: Started cri-containerd-396deaf975e7d2005abf49804d908b2725e7d9730ae57245ce321aab2feeaa0c.scope - libcontainer container 396deaf975e7d2005abf49804d908b2725e7d9730ae57245ce321aab2feeaa0c. 
May 10 10:05:21.900064 containerd[1491]: time="2025-05-10T10:05:21.900028268Z" level=info msg="StartContainer for \"396deaf975e7d2005abf49804d908b2725e7d9730ae57245ce321aab2feeaa0c\" returns successfully" May 10 10:05:22.014196 containerd[1491]: time="2025-05-10T10:05:22.014153211Z" level=info msg="TaskExit event in podsandbox handler container_id:\"396deaf975e7d2005abf49804d908b2725e7d9730ae57245ce321aab2feeaa0c\" id:\"0e0f209a5c173ff9711d9d24dc9ea4ba7b6e37e49232c3f38d0f59f9681d9fa8\" pid:4839 exited_at:{seconds:1746871522 nanos:13784239}" May 10 10:05:22.306414 kernel: cryptd: max_cpu_qlen set to 1000 May 10 10:05:22.361404 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm_base(ctr(aes-generic),ghash-generic)))) May 10 10:05:22.817719 kubelet[2860]: I0510 10:05:22.817446 2860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-2mg26" podStartSLOduration=5.817174849 podStartE2EDuration="5.817174849s" podCreationTimestamp="2025-05-10 10:05:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-10 10:05:22.80733188 +0000 UTC m=+291.816212719" watchObservedRunningTime="2025-05-10 10:05:22.817174849 +0000 UTC m=+291.826055659" May 10 10:05:23.231402 containerd[1491]: time="2025-05-10T10:05:23.231291927Z" level=info msg="TaskExit event in podsandbox handler container_id:\"396deaf975e7d2005abf49804d908b2725e7d9730ae57245ce321aab2feeaa0c\" id:\"7383ac329462a2282936b42a38a081ff6c421a738df440b54467ef0737762a42\" pid:4968 exit_status:1 exited_at:{seconds:1746871523 nanos:230917005}" May 10 10:05:25.383413 containerd[1491]: time="2025-05-10T10:05:25.383232750Z" level=info msg="TaskExit event in podsandbox handler container_id:\"396deaf975e7d2005abf49804d908b2725e7d9730ae57245ce321aab2feeaa0c\" id:\"d55fdba3ceef96c79594f8e79e91bc1299eb2d1159946149f5981d82980bc48d\" pid:5322 exit_status:1 exited_at:{seconds:1746871525 nanos:382635751}" May 10 10:05:25.627035 systemd-networkd[1419]: lxc_health: Link UP May 10 10:05:25.635156 systemd-networkd[1419]: lxc_health: Gained carrier May 10 10:05:25.927480 containerd[1491]: time="2025-05-10T10:05:25.927239382Z" level=warning msg="container event discarded" container=4df4d3d481fd8465d0ddc16d0a323166b93e53b321916ad98171e184e66c65a7 type=CONTAINER_CREATED_EVENT May 10 10:05:25.939681 containerd[1491]: time="2025-05-10T10:05:25.938557908Z" level=warning msg="container event discarded" container=4df4d3d481fd8465d0ddc16d0a323166b93e53b321916ad98171e184e66c65a7 type=CONTAINER_STARTED_EVENT May 10 10:05:25.963957 containerd[1491]: time="2025-05-10T10:05:25.963837029Z" level=warning msg="container event discarded" container=8b36cada24d1025200a3089403edec560e87f6dcd7ddadaa5170dd58b055374e type=CONTAINER_CREATED_EVENT May 10 10:05:25.963957 containerd[1491]: time="2025-05-10T10:05:25.963884548Z" level=warning msg="container event discarded" container=8b36cada24d1025200a3089403edec560e87f6dcd7ddadaa5170dd58b055374e type=CONTAINER_STARTED_EVENT May 10 10:05:25.963957 containerd[1491]: time="2025-05-10T10:05:25.963899666Z" level=warning msg="container event discarded" container=3511290f84cdb5a8a4b511c0aee9119334db481359d52c4e34c0778d86cce0b0 type=CONTAINER_CREATED_EVENT May 10 10:05:25.963957 containerd[1491]: time="2025-05-10T10:05:25.963925194Z" level=warning msg="container event discarded" container=3511290f84cdb5a8a4b511c0aee9119334db481359d52c4e34c0778d86cce0b0 type=CONTAINER_STARTED_EVENT May 10 10:05:25.981181 
containerd[1491]: time="2025-05-10T10:05:25.981124387Z" level=warning msg="container event discarded" container=eb3eafe221b921be0322c9a1482911e90ab2118e76f123801aeaeb5fd9bdcc27 type=CONTAINER_CREATED_EVENT May 10 10:05:26.008415 containerd[1491]: time="2025-05-10T10:05:26.008299310Z" level=warning msg="container event discarded" container=6cbe12b51f3cfb3905a87a9123b37397f6a85f2346dac9692f1e44d312ce0f08 type=CONTAINER_CREATED_EVENT May 10 10:05:26.008415 containerd[1491]: time="2025-05-10T10:05:26.008349003Z" level=warning msg="container event discarded" container=4b36bf06c42bd026a27de688fa656da513b65c7559193656d5704222b415f7ed type=CONTAINER_CREATED_EVENT May 10 10:05:26.087707 containerd[1491]: time="2025-05-10T10:05:26.087636541Z" level=warning msg="container event discarded" container=eb3eafe221b921be0322c9a1482911e90ab2118e76f123801aeaeb5fd9bdcc27 type=CONTAINER_STARTED_EVENT May 10 10:05:26.138271 containerd[1491]: time="2025-05-10T10:05:26.138197890Z" level=warning msg="container event discarded" container=6cbe12b51f3cfb3905a87a9123b37397f6a85f2346dac9692f1e44d312ce0f08 type=CONTAINER_STARTED_EVENT May 10 10:05:26.160490 containerd[1491]: time="2025-05-10T10:05:26.160426175Z" level=warning msg="container event discarded" container=4b36bf06c42bd026a27de688fa656da513b65c7559193656d5704222b415f7ed type=CONTAINER_STARTED_EVENT May 10 10:05:27.015628 systemd-networkd[1419]: lxc_health: Gained IPv6LL May 10 10:05:27.689847 containerd[1491]: time="2025-05-10T10:05:27.689633587Z" level=info msg="TaskExit event in podsandbox handler container_id:\"396deaf975e7d2005abf49804d908b2725e7d9730ae57245ce321aab2feeaa0c\" id:\"9d9be371c86c1fafc3e692936d12a9ed2f01506d76b20474380aea37e370d443\" pid:5446 exited_at:{seconds:1746871527 nanos:688740163}" May 10 10:05:30.098194 containerd[1491]: time="2025-05-10T10:05:30.098016326Z" level=info msg="TaskExit event in podsandbox handler container_id:\"396deaf975e7d2005abf49804d908b2725e7d9730ae57245ce321aab2feeaa0c\" id:\"71cfd6ca840b26102d9cf7a50a19d87576d3a620425c649345228b6f737239ea\" pid:5471 exited_at:{seconds:1746871530 nanos:97579237}" May 10 10:05:31.391137 containerd[1491]: time="2025-05-10T10:05:31.389343585Z" level=info msg="StopPodSandbox for \"b79958dcc1cf9bbf43f9cea39492fb16ac65e9cbdc2c9cf491da1d807ed6581e\"" May 10 10:05:31.391137 containerd[1491]: time="2025-05-10T10:05:31.389754735Z" level=info msg="TearDown network for sandbox \"b79958dcc1cf9bbf43f9cea39492fb16ac65e9cbdc2c9cf491da1d807ed6581e\" successfully" May 10 10:05:31.391137 containerd[1491]: time="2025-05-10T10:05:31.389793618Z" level=info msg="StopPodSandbox for \"b79958dcc1cf9bbf43f9cea39492fb16ac65e9cbdc2c9cf491da1d807ed6581e\" returns successfully" May 10 10:05:31.392607 containerd[1491]: time="2025-05-10T10:05:31.392474621Z" level=info msg="RemovePodSandbox for \"b79958dcc1cf9bbf43f9cea39492fb16ac65e9cbdc2c9cf491da1d807ed6581e\"" May 10 10:05:31.392607 containerd[1491]: time="2025-05-10T10:05:31.392576062Z" level=info msg="Forcibly stopping sandbox \"b79958dcc1cf9bbf43f9cea39492fb16ac65e9cbdc2c9cf491da1d807ed6581e\"" May 10 10:05:31.393219 containerd[1491]: time="2025-05-10T10:05:31.392793640Z" level=info msg="TearDown network for sandbox \"b79958dcc1cf9bbf43f9cea39492fb16ac65e9cbdc2c9cf491da1d807ed6581e\" successfully" May 10 10:05:31.399063 containerd[1491]: time="2025-05-10T10:05:31.398924818Z" level=info msg="Ensure that sandbox b79958dcc1cf9bbf43f9cea39492fb16ac65e9cbdc2c9cf491da1d807ed6581e in task-service has been cleanup successfully" May 10 10:05:31.415971 
containerd[1491]: time="2025-05-10T10:05:31.415850700Z" level=info msg="RemovePodSandbox \"b79958dcc1cf9bbf43f9cea39492fb16ac65e9cbdc2c9cf491da1d807ed6581e\" returns successfully" May 10 10:05:31.417737 containerd[1491]: time="2025-05-10T10:05:31.417047573Z" level=info msg="StopPodSandbox for \"b675babd66b94d325da2ecb4e6de5918c4b0fdf0d4c875f91b38353642e141c2\"" May 10 10:05:31.418810 containerd[1491]: time="2025-05-10T10:05:31.418549677Z" level=info msg="TearDown network for sandbox \"b675babd66b94d325da2ecb4e6de5918c4b0fdf0d4c875f91b38353642e141c2\" successfully" May 10 10:05:31.418810 containerd[1491]: time="2025-05-10T10:05:31.418649144Z" level=info msg="StopPodSandbox for \"b675babd66b94d325da2ecb4e6de5918c4b0fdf0d4c875f91b38353642e141c2\" returns successfully" May 10 10:05:31.421747 containerd[1491]: time="2025-05-10T10:05:31.420077520Z" level=info msg="RemovePodSandbox for \"b675babd66b94d325da2ecb4e6de5918c4b0fdf0d4c875f91b38353642e141c2\"" May 10 10:05:31.421747 containerd[1491]: time="2025-05-10T10:05:31.420144595Z" level=info msg="Forcibly stopping sandbox \"b675babd66b94d325da2ecb4e6de5918c4b0fdf0d4c875f91b38353642e141c2\"" May 10 10:05:31.421747 containerd[1491]: time="2025-05-10T10:05:31.420331546Z" level=info msg="TearDown network for sandbox \"b675babd66b94d325da2ecb4e6de5918c4b0fdf0d4c875f91b38353642e141c2\" successfully" May 10 10:05:31.428919 containerd[1491]: time="2025-05-10T10:05:31.428755519Z" level=info msg="Ensure that sandbox b675babd66b94d325da2ecb4e6de5918c4b0fdf0d4c875f91b38353642e141c2 in task-service has been cleanup successfully" May 10 10:05:31.466007 containerd[1491]: time="2025-05-10T10:05:31.465842979Z" level=info msg="RemovePodSandbox \"b675babd66b94d325da2ecb4e6de5918c4b0fdf0d4c875f91b38353642e141c2\" returns successfully" May 10 10:05:32.301437 containerd[1491]: time="2025-05-10T10:05:32.301314130Z" level=info msg="TaskExit event in podsandbox handler container_id:\"396deaf975e7d2005abf49804d908b2725e7d9730ae57245ce321aab2feeaa0c\" id:\"068274b765fdd7d60baa8f2b8515d604b4a2cd9a6a1efab1dcf65dcebe7f616d\" pid:5503 exited_at:{seconds:1746871532 nanos:300830013}" May 10 10:05:32.306179 kubelet[2860]: E0510 10:05:32.305926 2860 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:43556->127.0.0.1:42177: write tcp 127.0.0.1:43556->127.0.0.1:42177: write: broken pipe May 10 10:05:32.306179 kubelet[2860]: E0510 10:05:32.305844 2860 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 127.0.0.1:43556->127.0.0.1:42177: read tcp 127.0.0.1:43556->127.0.0.1:42177: read: connection reset by peer May 10 10:05:34.555392 containerd[1491]: time="2025-05-10T10:05:34.555316836Z" level=info msg="TaskExit event in podsandbox handler container_id:\"396deaf975e7d2005abf49804d908b2725e7d9730ae57245ce321aab2feeaa0c\" id:\"80f20ee1a0c71e0bcf4a3d80082aa026492fb4563a577ac6ea204be3bbeef0fd\" pid:5525 exited_at:{seconds:1746871534 nanos:554414506}" May 10 10:05:34.829016 sshd[4755]: Connection closed by 172.24.4.1 port 37416 May 10 10:05:34.828505 sshd-session[4711]: pam_unix(sshd:session): session closed for user core May 10 10:05:34.838523 systemd[1]: sshd@27-172.24.4.22:22-172.24.4.1:37416.service: Deactivated successfully. May 10 10:05:34.848092 systemd[1]: session-30.scope: Deactivated successfully. May 10 10:05:34.852057 systemd-logind[1468]: Session 30 logged out. Waiting for processes to exit. May 10 10:05:34.858309 systemd-logind[1468]: Removed session 30. 
May 10 10:05:49.029165 containerd[1491]: time="2025-05-10T10:05:49.028953156Z" level=warning msg="container event discarded" container=b675babd66b94d325da2ecb4e6de5918c4b0fdf0d4c875f91b38353642e141c2 type=CONTAINER_CREATED_EVENT May 10 10:05:49.029165 containerd[1491]: time="2025-05-10T10:05:49.029145797Z" level=warning msg="container event discarded" container=b675babd66b94d325da2ecb4e6de5918c4b0fdf0d4c875f91b38353642e141c2 type=CONTAINER_STARTED_EVENT May 10 10:05:49.029165 containerd[1491]: time="2025-05-10T10:05:49.029177637Z" level=warning msg="container event discarded" container=3d3267607487f3d83c6ea3c852ec2d378d9942387a3b7f27430957fb3c692ade type=CONTAINER_CREATED_EVENT May 10 10:05:49.029165 containerd[1491]: time="2025-05-10T10:05:49.029203074Z" level=warning msg="container event discarded" container=3d3267607487f3d83c6ea3c852ec2d378d9942387a3b7f27430957fb3c692ade type=CONTAINER_STARTED_EVENT May 10 10:05:49.070625 containerd[1491]: time="2025-05-10T10:05:49.070553205Z" level=warning msg="container event discarded" container=32c44cca0b6ae0c57561a7f6780537ccda20bd7c736350da3f238b13d8b55c76 type=CONTAINER_CREATED_EVENT May 10 10:05:49.082104 containerd[1491]: time="2025-05-10T10:05:49.082002210Z" level=warning msg="container event discarded" container=b79958dcc1cf9bbf43f9cea39492fb16ac65e9cbdc2c9cf491da1d807ed6581e type=CONTAINER_CREATED_EVENT May 10 10:05:49.082104 containerd[1491]: time="2025-05-10T10:05:49.082089123Z" level=warning msg="container event discarded" container=b79958dcc1cf9bbf43f9cea39492fb16ac65e9cbdc2c9cf491da1d807ed6581e type=CONTAINER_STARTED_EVENT May 10 10:05:49.152450 containerd[1491]: time="2025-05-10T10:05:49.152288407Z" level=warning msg="container event discarded" container=32c44cca0b6ae0c57561a7f6780537ccda20bd7c736350da3f238b13d8b55c76 type=CONTAINER_STARTED_EVENT May 10 10:05:57.916680 containerd[1491]: time="2025-05-10T10:05:57.916471139Z" level=warning msg="container event discarded" container=b38f1e8da1de9e56e6519ae708b769fd2ed9961296f59f7146cb0e1922612f55 type=CONTAINER_CREATED_EVENT May 10 10:05:57.998182 containerd[1491]: time="2025-05-10T10:05:57.997990727Z" level=warning msg="container event discarded" container=b38f1e8da1de9e56e6519ae708b769fd2ed9961296f59f7146cb0e1922612f55 type=CONTAINER_STARTED_EVENT May 10 10:05:59.599595 containerd[1491]: time="2025-05-10T10:05:59.599359219Z" level=warning msg="container event discarded" container=b38f1e8da1de9e56e6519ae708b769fd2ed9961296f59f7146cb0e1922612f55 type=CONTAINER_STOPPED_EVENT May 10 10:06:00.570186 containerd[1491]: time="2025-05-10T10:06:00.569976713Z" level=warning msg="container event discarded" container=3c30e87913d7636b44bffa4ec1ffa8e40c99b7afcedaabbe507be7547e9c13e6 type=CONTAINER_CREATED_EVENT May 10 10:06:00.655203 containerd[1491]: time="2025-05-10T10:06:00.654697324Z" level=warning msg="container event discarded" container=3c30e87913d7636b44bffa4ec1ffa8e40c99b7afcedaabbe507be7547e9c13e6 type=CONTAINER_STARTED_EVENT May 10 10:06:00.724247 containerd[1491]: time="2025-05-10T10:06:00.724052447Z" level=warning msg="container event discarded" container=3c30e87913d7636b44bffa4ec1ffa8e40c99b7afcedaabbe507be7547e9c13e6 type=CONTAINER_STOPPED_EVENT May 10 10:06:01.570751 containerd[1491]: time="2025-05-10T10:06:01.570608372Z" level=warning msg="container event discarded" container=bdad80e925ea5e54c1b893bd566edfd9e06af99a02c94f53f5f4ab6d927f4098 type=CONTAINER_CREATED_EVENT May 10 10:06:01.676546 containerd[1491]: time="2025-05-10T10:06:01.676405306Z" level=warning msg="container event 
discarded" container=bdad80e925ea5e54c1b893bd566edfd9e06af99a02c94f53f5f4ab6d927f4098 type=CONTAINER_STARTED_EVENT May 10 10:06:01.813055 containerd[1491]: time="2025-05-10T10:06:01.812894854Z" level=warning msg="container event discarded" container=bdad80e925ea5e54c1b893bd566edfd9e06af99a02c94f53f5f4ab6d927f4098 type=CONTAINER_STOPPED_EVENT May 10 10:06:02.293477 containerd[1491]: time="2025-05-10T10:06:02.293289289Z" level=warning msg="container event discarded" container=4f3c65f59c97d8d48b0d7aef2564059d24984b0a0bffb04c151f2a4968e85c53 type=CONTAINER_CREATED_EVENT May 10 10:06:02.351166 containerd[1491]: time="2025-05-10T10:06:02.351052580Z" level=warning msg="container event discarded" container=4f3c65f59c97d8d48b0d7aef2564059d24984b0a0bffb04c151f2a4968e85c53 type=CONTAINER_STARTED_EVENT May 10 10:06:02.558411 containerd[1491]: time="2025-05-10T10:06:02.557720327Z" level=warning msg="container event discarded" container=02f7c0c79c146df408d6e2ef7176b061f198fa7aab61037b008178e29bc49857 type=CONTAINER_CREATED_EVENT May 10 10:06:02.703812 containerd[1491]: time="2025-05-10T10:06:02.703622258Z" level=warning msg="container event discarded" container=02f7c0c79c146df408d6e2ef7176b061f198fa7aab61037b008178e29bc49857 type=CONTAINER_STARTED_EVENT May 10 10:06:02.983568 containerd[1491]: time="2025-05-10T10:06:02.983358472Z" level=warning msg="container event discarded" container=02f7c0c79c146df408d6e2ef7176b061f198fa7aab61037b008178e29bc49857 type=CONTAINER_STOPPED_EVENT May 10 10:06:03.612601 containerd[1491]: time="2025-05-10T10:06:03.612335102Z" level=warning msg="container event discarded" container=70428e9f22fc26880eae31ffe1cdc0b0212a78b7468537f0f990f33a4499d56a type=CONTAINER_CREATED_EVENT May 10 10:06:03.681877 containerd[1491]: time="2025-05-10T10:06:03.681768756Z" level=warning msg="container event discarded" container=70428e9f22fc26880eae31ffe1cdc0b0212a78b7468537f0f990f33a4499d56a type=CONTAINER_STARTED_EVENT May 10 10:06:13.050646 containerd[1491]: time="2025-05-10T10:06:13.050541429Z" level=warning msg="container event discarded" container=dec72256aa53045ae2b6236dee155d7313f64263dfd6f2a9f698cb9acf5c1031 type=CONTAINER_CREATED_EVENT May 10 10:06:13.050646 containerd[1491]: time="2025-05-10T10:06:13.050626639Z" level=warning msg="container event discarded" container=dec72256aa53045ae2b6236dee155d7313f64263dfd6f2a9f698cb9acf5c1031 type=CONTAINER_STARTED_EVENT May 10 10:06:13.081140 containerd[1491]: time="2025-05-10T10:06:13.080992896Z" level=warning msg="container event discarded" container=3d040acf0ad20bc72b7892abd9baa10dfa655406c267f247855d52022aa32c9b type=CONTAINER_CREATED_EVENT May 10 10:06:13.095530 containerd[1491]: time="2025-05-10T10:06:13.095432009Z" level=warning msg="container event discarded" container=deb9a3c307a2623373dbfbef226d243cfd02a51934d7d444a5af79568c740b1e type=CONTAINER_CREATED_EVENT May 10 10:06:13.095740 containerd[1491]: time="2025-05-10T10:06:13.095537297Z" level=warning msg="container event discarded" container=deb9a3c307a2623373dbfbef226d243cfd02a51934d7d444a5af79568c740b1e type=CONTAINER_STARTED_EVENT May 10 10:06:13.128309 containerd[1491]: time="2025-05-10T10:06:13.128188997Z" level=warning msg="container event discarded" container=592cedeb04d88156ca2ec88e58d19ac3366b025a8b3e4104b86cb47f8de2ccea type=CONTAINER_CREATED_EVENT May 10 10:06:13.169771 containerd[1491]: time="2025-05-10T10:06:13.169646783Z" level=warning msg="container event discarded" container=3d040acf0ad20bc72b7892abd9baa10dfa655406c267f247855d52022aa32c9b type=CONTAINER_STARTED_EVENT May 
10 10:06:13.206088 containerd[1491]: time="2025-05-10T10:06:13.205969304Z" level=warning msg="container event discarded" container=592cedeb04d88156ca2ec88e58d19ac3366b025a8b3e4104b86cb47f8de2ccea type=CONTAINER_STARTED_EVENT May 10 10:06:32.839204 update_engine[1471]: I20250510 10:06:32.838716 1471 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs May 10 10:06:32.839204 update_engine[1471]: I20250510 10:06:32.839100 1471 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs May 10 10:06:32.842001 update_engine[1471]: I20250510 10:06:32.840147 1471 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs May 10 10:06:32.844432 update_engine[1471]: I20250510 10:06:32.843158 1471 omaha_request_params.cc:62] Current group set to developer May 10 10:06:32.844432 update_engine[1471]: I20250510 10:06:32.843950 1471 update_attempter.cc:499] Already updated boot flags. Skipping. May 10 10:06:32.844432 update_engine[1471]: I20250510 10:06:32.843986 1471 update_attempter.cc:643] Scheduling an action processor start. May 10 10:06:32.844432 update_engine[1471]: I20250510 10:06:32.844062 1471 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction May 10 10:06:32.845356 update_engine[1471]: I20250510 10:06:32.845005 1471 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs May 10 10:06:32.846467 update_engine[1471]: I20250510 10:06:32.845763 1471 omaha_request_action.cc:271] Posting an Omaha request to disabled May 10 10:06:32.846467 update_engine[1471]: I20250510 10:06:32.845836 1471 omaha_request_action.cc:272] Request: May 10 10:06:32.846467 update_engine[1471]: May 10 10:06:32.846467 update_engine[1471]: May 10 10:06:32.846467 update_engine[1471]: May 10 10:06:32.846467 update_engine[1471]: May 10 10:06:32.846467 update_engine[1471]: May 10 10:06:32.846467 update_engine[1471]: May 10 10:06:32.846467 update_engine[1471]: May 10 10:06:32.846467 update_engine[1471]: May 10 10:06:32.846467 update_engine[1471]: I20250510 10:06:32.845865 1471 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 10 10:06:32.848951 locksmithd[1507]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 May 10 10:06:32.852785 update_engine[1471]: I20250510 10:06:32.852712 1471 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 10 10:06:32.854124 update_engine[1471]: I20250510 10:06:32.853964 1471 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. May 10 10:06:32.861809 update_engine[1471]: E20250510 10:06:32.861712 1471 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 10 10:06:32.861992 update_engine[1471]: I20250510 10:06:32.861923 1471 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 May 10 10:06:42.797886 update_engine[1471]: I20250510 10:06:42.797340 1471 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 10 10:06:42.800845 update_engine[1471]: I20250510 10:06:42.798898 1471 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 10 10:06:42.800845 update_engine[1471]: I20250510 10:06:42.800143 1471 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
May 10 10:06:42.805231 update_engine[1471]: E20250510 10:06:42.805124 1471 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 10 10:06:42.805433 update_engine[1471]: I20250510 10:06:42.805305 1471 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 May 10 10:06:52.797408 update_engine[1471]: I20250510 10:06:52.797180 1471 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 10 10:06:52.798423 update_engine[1471]: I20250510 10:06:52.797864 1471 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 10 10:06:52.798736 update_engine[1471]: I20250510 10:06:52.798533 1471 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. May 10 10:06:52.803963 update_engine[1471]: E20250510 10:06:52.803866 1471 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 10 10:06:52.804139 update_engine[1471]: I20250510 10:06:52.804018 1471 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 May 10 10:07:02.795619 update_engine[1471]: I20250510 10:07:02.795317 1471 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 10 10:07:02.796590 update_engine[1471]: I20250510 10:07:02.796224 1471 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 10 10:07:02.797119 update_engine[1471]: I20250510 10:07:02.796983 1471 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. May 10 10:07:02.802169 update_engine[1471]: E20250510 10:07:02.802073 1471 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 10 10:07:02.802529 update_engine[1471]: I20250510 10:07:02.802181 1471 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded May 10 10:07:02.802529 update_engine[1471]: I20250510 10:07:02.802239 1471 omaha_request_action.cc:617] Omaha request response: May 10 10:07:02.802737 update_engine[1471]: E20250510 10:07:02.802627 1471 omaha_request_action.cc:636] Omaha request network transfer failed. May 10 10:07:02.803256 update_engine[1471]: I20250510 10:07:02.803169 1471 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. May 10 10:07:02.803256 update_engine[1471]: I20250510 10:07:02.803202 1471 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction May 10 10:07:02.803591 update_engine[1471]: I20250510 10:07:02.803253 1471 update_attempter.cc:306] Processing Done. May 10 10:07:02.803591 update_engine[1471]: E20250510 10:07:02.803467 1471 update_attempter.cc:619] Update failed. May 10 10:07:02.803591 update_engine[1471]: I20250510 10:07:02.803517 1471 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse May 10 10:07:02.803591 update_engine[1471]: I20250510 10:07:02.803531 1471 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) May 10 10:07:02.803591 update_engine[1471]: I20250510 10:07:02.803545 1471 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. 
May 10 10:07:02.804246 update_engine[1471]: I20250510 10:07:02.804129 1471 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction May 10 10:07:02.804694 update_engine[1471]: I20250510 10:07:02.804286 1471 omaha_request_action.cc:271] Posting an Omaha request to disabled May 10 10:07:02.804694 update_engine[1471]: I20250510 10:07:02.804307 1471 omaha_request_action.cc:272] Request: May 10 10:07:02.804694 update_engine[1471]: May 10 10:07:02.804694 update_engine[1471]: May 10 10:07:02.804694 update_engine[1471]: May 10 10:07:02.804694 update_engine[1471]: May 10 10:07:02.804694 update_engine[1471]: May 10 10:07:02.804694 update_engine[1471]: May 10 10:07:02.804694 update_engine[1471]: I20250510 10:07:02.804322 1471 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 10 10:07:02.806198 update_engine[1471]: I20250510 10:07:02.805913 1471 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 10 10:07:02.807441 update_engine[1471]: I20250510 10:07:02.806666 1471 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. May 10 10:07:02.808784 locksmithd[1507]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 May 10 10:07:02.811852 update_engine[1471]: E20250510 10:07:02.811733 1471 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 10 10:07:02.811852 update_engine[1471]: I20250510 10:07:02.811838 1471 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded May 10 10:07:02.811852 update_engine[1471]: I20250510 10:07:02.811859 1471 omaha_request_action.cc:617] Omaha request response: May 10 10:07:02.812266 update_engine[1471]: I20250510 10:07:02.811878 1471 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction May 10 10:07:02.812266 update_engine[1471]: I20250510 10:07:02.811892 1471 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction May 10 10:07:02.812266 update_engine[1471]: I20250510 10:07:02.811905 1471 update_attempter.cc:306] Processing Done. May 10 10:07:02.812266 update_engine[1471]: I20250510 10:07:02.811920 1471 update_attempter.cc:310] Error event sent. May 10 10:07:02.812266 update_engine[1471]: I20250510 10:07:02.811975 1471 update_check_scheduler.cc:74] Next update check in 44m43s May 10 10:07:02.813310 locksmithd[1507]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0