May 13 03:38:11.035406 kernel: Linux version 6.6.89-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Mon May 12 22:20:27 -00 2025 May 13 03:38:11.035434 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=6100311d7c6d7eb9a6a36f80463a2db3a7c7060cd315301434a372d8e2ca9bd6 May 13 03:38:11.035444 kernel: BIOS-provided physical RAM map: May 13 03:38:11.035452 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable May 13 03:38:11.035459 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved May 13 03:38:11.035468 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved May 13 03:38:11.035477 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdcfff] usable May 13 03:38:11.035485 kernel: BIOS-e820: [mem 0x00000000bffdd000-0x00000000bfffffff] reserved May 13 03:38:11.035492 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved May 13 03:38:11.035500 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved May 13 03:38:11.035507 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000013fffffff] usable May 13 03:38:11.035515 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved May 13 03:38:11.035522 kernel: NX (Execute Disable) protection: active May 13 03:38:11.035530 kernel: APIC: Static calls initialized May 13 03:38:11.035541 kernel: SMBIOS 3.0.0 present. 
May 13 03:38:11.035549 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.16.3-debian-1.16.3-2 04/01/2014 May 13 03:38:11.035557 kernel: Hypervisor detected: KVM May 13 03:38:11.035565 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 May 13 03:38:11.035572 kernel: kvm-clock: using sched offset of 3593098101 cycles May 13 03:38:11.035581 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns May 13 03:38:11.035591 kernel: tsc: Detected 1996.249 MHz processor May 13 03:38:11.035599 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved May 13 03:38:11.035626 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable May 13 03:38:11.035634 kernel: last_pfn = 0x140000 max_arch_pfn = 0x400000000 May 13 03:38:11.035643 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs May 13 03:38:11.035651 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT May 13 03:38:11.035659 kernel: last_pfn = 0xbffdd max_arch_pfn = 0x400000000 May 13 03:38:11.035667 kernel: ACPI: Early table checksum verification disabled May 13 03:38:11.035677 kernel: ACPI: RSDP 0x00000000000F51E0 000014 (v00 BOCHS ) May 13 03:38:11.035686 kernel: ACPI: RSDT 0x00000000BFFE1B65 000030 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 13 03:38:11.035694 kernel: ACPI: FACP 0x00000000BFFE1A49 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 13 03:38:11.035702 kernel: ACPI: DSDT 0x00000000BFFE0040 001A09 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 13 03:38:11.035710 kernel: ACPI: FACS 0x00000000BFFE0000 000040 May 13 03:38:11.035719 kernel: ACPI: APIC 0x00000000BFFE1ABD 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) May 13 03:38:11.035727 kernel: ACPI: WAET 0x00000000BFFE1B3D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 13 03:38:11.035735 kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1a49-0xbffe1abc] May 13 03:38:11.035743 kernel: ACPI: Reserving DSDT table memory at [mem 0xbffe0040-0xbffe1a48]
May 13 03:38:11.035753 kernel: ACPI: Reserving FACS table memory at [mem 0xbffe0000-0xbffe003f] May 13 03:38:11.035761 kernel: ACPI: Reserving APIC table memory at [mem 0xbffe1abd-0xbffe1b3c] May 13 03:38:11.035770 kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1b3d-0xbffe1b64] May 13 03:38:11.035781 kernel: No NUMA configuration found May 13 03:38:11.035790 kernel: Faking a node at [mem 0x0000000000000000-0x000000013fffffff] May 13 03:38:11.035798 kernel: NODE_DATA(0) allocated [mem 0x13fff7000-0x13fffcfff] May 13 03:38:11.035807 kernel: Zone ranges: May 13 03:38:11.035817 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] May 13 03:38:11.035826 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] May 13 03:38:11.035835 kernel: Normal [mem 0x0000000100000000-0x000000013fffffff] May 13 03:38:11.035843 kernel: Movable zone start for each node May 13 03:38:11.035852 kernel: Early memory node ranges May 13 03:38:11.035860 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] May 13 03:38:11.035869 kernel: node 0: [mem 0x0000000000100000-0x00000000bffdcfff] May 13 03:38:11.035877 kernel: node 0: [mem 0x0000000100000000-0x000000013fffffff] May 13 03:38:11.035887 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000013fffffff] May 13 03:38:11.035896 kernel: On node 0, zone DMA: 1 pages in unavailable ranges May 13 03:38:11.035904 kernel: On node 0, zone DMA: 97 pages in unavailable ranges May 13 03:38:11.035913 kernel: On node 0, zone Normal: 35 pages in unavailable ranges May 13 03:38:11.035921 kernel: ACPI: PM-Timer IO Port: 0x608 May 13 03:38:11.035930 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) May 13 03:38:11.035939 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 May 13 03:38:11.035947 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) May 13 03:38:11.035956 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) May 13 03:38:11.035967 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
May 13 03:38:11.035975 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) May 13 03:38:11.035984 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) May 13 03:38:11.035993 kernel: ACPI: Using ACPI (MADT) for SMP configuration information May 13 03:38:11.036001 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs May 13 03:38:11.036010 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() May 13 03:38:11.036018 kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices May 13 03:38:11.036027 kernel: Booting paravirtualized kernel on KVM May 13 03:38:11.036035 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns May 13 03:38:11.036046 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 May 13 03:38:11.036055 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576 May 13 03:38:11.036063 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152 May 13 03:38:11.036072 kernel: pcpu-alloc: [0] 0 1 May 13 03:38:11.036080 kernel: kvm-guest: PV spinlocks disabled, no host support May 13 03:38:11.036090 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=6100311d7c6d7eb9a6a36f80463a2db3a7c7060cd315301434a372d8e2ca9bd6 May 13 03:38:11.036099 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 13 03:38:11.036108 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) May 13 03:38:11.036119 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 13 03:38:11.036127 kernel: Fallback order for Node 0: 0 May 13 03:38:11.036136 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1031901 May 13 03:38:11.036144 kernel: Policy zone: Normal May 13 03:38:11.036153 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off May 13 03:38:11.036161 kernel: software IO TLB: area num 2. May 13 03:38:11.036170 kernel: Memory: 3962108K/4193772K available (14336K kernel code, 2296K rwdata, 25068K rodata, 43604K init, 1468K bss, 231404K reserved, 0K cma-reserved) May 13 03:38:11.036179 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 May 13 03:38:11.036187 kernel: ftrace: allocating 37993 entries in 149 pages May 13 03:38:11.036198 kernel: ftrace: allocated 149 pages with 4 groups May 13 03:38:11.036207 kernel: Dynamic Preempt: voluntary May 13 03:38:11.036215 kernel: rcu: Preemptible hierarchical RCU implementation. May 13 03:38:11.036224 kernel: rcu: RCU event tracing is enabled. May 13 03:38:11.036247 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. May 13 03:38:11.036256 kernel: Trampoline variant of Tasks RCU enabled. May 13 03:38:11.036265 kernel: Rude variant of Tasks RCU enabled. May 13 03:38:11.036273 kernel: Tracing variant of Tasks RCU enabled. May 13 03:38:11.036282 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. May 13 03:38:11.036293 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 May 13 03:38:11.036301 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 May 13 03:38:11.036310 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
May 13 03:38:11.036318 kernel: Console: colour VGA+ 80x25 May 13 03:38:11.036327 kernel: printk: console [tty0] enabled May 13 03:38:11.036335 kernel: printk: console [ttyS0] enabled May 13 03:38:11.036344 kernel: ACPI: Core revision 20230628 May 13 03:38:11.036353 kernel: APIC: Switch to symmetric I/O mode setup May 13 03:38:11.036361 kernel: x2apic enabled May 13 03:38:11.036372 kernel: APIC: Switched APIC routing to: physical x2apic May 13 03:38:11.036380 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 May 13 03:38:11.036389 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized May 13 03:38:11.036397 kernel: Calibrating delay loop (skipped) preset value.. 3992.49 BogoMIPS (lpj=1996249) May 13 03:38:11.036406 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 May 13 03:38:11.036415 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 May 13 03:38:11.036423 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization May 13 03:38:11.036432 kernel: Spectre V2 : Mitigation: Retpolines May 13 03:38:11.036440 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT May 13 03:38:11.036451 kernel: Speculative Store Bypass: Vulnerable May 13 03:38:11.036460 kernel: x86/fpu: x87 FPU will use FXSAVE May 13 03:38:11.036468 kernel: Freeing SMP alternatives memory: 32K May 13 03:38:11.036477 kernel: pid_max: default: 32768 minimum: 301 May 13 03:38:11.036494 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity May 13 03:38:11.036504 kernel: landlock: Up and running. May 13 03:38:11.036513 kernel: SELinux: Initializing. 
May 13 03:38:11.036522 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 13 03:38:11.036531 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 13 03:38:11.036540 kernel: smpboot: CPU0: AMD Intel Core i7 9xx (Nehalem Class Core i7) (family: 0x6, model: 0x1a, stepping: 0x3) May 13 03:38:11.036549 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. May 13 03:38:11.036559 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. May 13 03:38:11.036571 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. May 13 03:38:11.036580 kernel: Performance Events: AMD PMU driver. May 13 03:38:11.036589 kernel: ... version: 0 May 13 03:38:11.036598 kernel: ... bit width: 48 May 13 03:38:11.036606 kernel: ... generic registers: 4 May 13 03:38:11.036617 kernel: ... value mask: 0000ffffffffffff May 13 03:38:11.036626 kernel: ... max period: 00007fffffffffff May 13 03:38:11.036635 kernel: ... fixed-purpose events: 0 May 13 03:38:11.036644 kernel: ... event mask: 000000000000000f May 13 03:38:11.036653 kernel: signal: max sigframe size: 1440 May 13 03:38:11.036662 kernel: rcu: Hierarchical SRCU implementation. May 13 03:38:11.036671 kernel: rcu: Max phase no-delay instances is 400. May 13 03:38:11.036680 kernel: smp: Bringing up secondary CPUs ... May 13 03:38:11.036689 kernel: smpboot: x86: Booting SMP configuration: May 13 03:38:11.036700 kernel: .... node #0, CPUs: #1
May 13 03:38:11.036709 kernel: smp: Brought up 1 node, 2 CPUs May 13 03:38:11.036718 kernel: smpboot: Max logical packages: 2 May 13 03:38:11.036727 kernel: smpboot: Total of 2 processors activated (7984.99 BogoMIPS) May 13 03:38:11.036736 kernel: devtmpfs: initialized May 13 03:38:11.036745 kernel: x86/mm: Memory block size: 128MB May 13 03:38:11.036754 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns May 13 03:38:11.036763 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) May 13 03:38:11.036772 kernel: pinctrl core: initialized pinctrl subsystem May 13 03:38:11.036783 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family May 13 03:38:11.036792 kernel: audit: initializing netlink subsys (disabled) May 13 03:38:11.036801 kernel: audit: type=2000 audit(1747107489.844:1): state=initialized audit_enabled=0 res=1 May 13 03:38:11.036810 kernel: thermal_sys: Registered thermal governor 'step_wise' May 13 03:38:11.036819 kernel: thermal_sys: Registered thermal governor 'user_space' May 13 03:38:11.036828 kernel: cpuidle: using governor menu May 13 03:38:11.036837 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 May 13 03:38:11.036846 kernel: dca service started, version 1.12.1 May 13 03:38:11.036855 kernel: PCI: Using configuration type 1 for base access May 13 03:38:11.036866 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
May 13 03:38:11.036875 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages May 13 03:38:11.036884 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page May 13 03:38:11.036893 kernel: ACPI: Added _OSI(Module Device) May 13 03:38:11.036902 kernel: ACPI: Added _OSI(Processor Device) May 13 03:38:11.036911 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) May 13 03:38:11.036920 kernel: ACPI: Added _OSI(Processor Aggregator Device) May 13 03:38:11.036929 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded May 13 03:38:11.036938 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC May 13 03:38:11.036948 kernel: ACPI: Interpreter enabled May 13 03:38:11.036957 kernel: ACPI: PM: (supports S0 S3 S5) May 13 03:38:11.036966 kernel: ACPI: Using IOAPIC for interrupt routing May 13 03:38:11.036975 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug May 13 03:38:11.036984 kernel: PCI: Using E820 reservations for host bridge windows May 13 03:38:11.036993 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F May 13 03:38:11.037002 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) May 13 03:38:11.037152 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] May 13 03:38:11.039331 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] May 13 03:38:11.039432 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge May 13 03:38:11.039447 kernel: acpiphp: Slot [3] registered May 13 03:38:11.039457 kernel: acpiphp: Slot [4] registered May 13 03:38:11.039466 kernel: acpiphp: Slot [5] registered May 13 03:38:11.039475 kernel: acpiphp: Slot [6] registered May 13 03:38:11.039484 kernel: acpiphp: Slot [7] registered May 13 03:38:11.039493 kernel: acpiphp: Slot [8] registered May 13 03:38:11.039506 kernel: acpiphp: Slot [9] registered May 13 03:38:11.039515 kernel: acpiphp: Slot [10] registered
May 13 03:38:11.039523 kernel: acpiphp: Slot [11] registered May 13 03:38:11.039532 kernel: acpiphp: Slot [12] registered May 13 03:38:11.039541 kernel: acpiphp: Slot [13] registered May 13 03:38:11.039550 kernel: acpiphp: Slot [14] registered May 13 03:38:11.039559 kernel: acpiphp: Slot [15] registered May 13 03:38:11.039568 kernel: acpiphp: Slot [16] registered May 13 03:38:11.039576 kernel: acpiphp: Slot [17] registered May 13 03:38:11.039585 kernel: acpiphp: Slot [18] registered May 13 03:38:11.039596 kernel: acpiphp: Slot [19] registered May 13 03:38:11.039619 kernel: acpiphp: Slot [20] registered May 13 03:38:11.039628 kernel: acpiphp: Slot [21] registered May 13 03:38:11.039637 kernel: acpiphp: Slot [22] registered May 13 03:38:11.039646 kernel: acpiphp: Slot [23] registered May 13 03:38:11.039655 kernel: acpiphp: Slot [24] registered May 13 03:38:11.039664 kernel: acpiphp: Slot [25] registered May 13 03:38:11.039672 kernel: acpiphp: Slot [26] registered May 13 03:38:11.039681 kernel: acpiphp: Slot [27] registered May 13 03:38:11.039692 kernel: acpiphp: Slot [28] registered May 13 03:38:11.039700 kernel: acpiphp: Slot [29] registered May 13 03:38:11.039709 kernel: acpiphp: Slot [30] registered May 13 03:38:11.039718 kernel: acpiphp: Slot [31] registered May 13 03:38:11.039727 kernel: PCI host bridge to bus 0000:00 May 13 03:38:11.039828 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] May 13 03:38:11.039914 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] May 13 03:38:11.039997 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] May 13 03:38:11.040085 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] May 13 03:38:11.040168 kernel: pci_bus 0000:00: root bus resource [mem 0xc000000000-0xc07fffffff window] May 13 03:38:11.040276 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] May 13 03:38:11.040394 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
May 13 03:38:11.040504 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 May 13 03:38:11.040607 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 May 13 03:38:11.040707 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc120-0xc12f] May 13 03:38:11.040799 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] May 13 03:38:11.040890 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6] May 13 03:38:11.040983 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] May 13 03:38:11.041075 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376] May 13 03:38:11.041177 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 May 13 03:38:11.043094 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI May 13 03:38:11.043210 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB May 13 03:38:11.043382 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 May 13 03:38:11.043491 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref] May 13 03:38:11.043591 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xc000000000-0xc000003fff 64bit pref] May 13 03:38:11.043739 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfeb90000-0xfeb90fff] May 13 03:38:11.043896 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfeb80000-0xfeb8ffff pref] May 13 03:38:11.043994 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] May 13 03:38:11.044107 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 May 13 03:38:11.044203 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc080-0xc0bf] May 13 03:38:11.046326 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfeb91000-0xfeb91fff] May 13 03:38:11.046422 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xc000004000-0xc000007fff 64bit pref] May 13 03:38:11.046515 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb00000-0xfeb7ffff pref] May 13 03:38:11.046618 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
May 13 03:38:11.046713 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f] May 13 03:38:11.046812 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfeb92000-0xfeb92fff] May 13 03:38:11.046905 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xc000008000-0xc00000bfff 64bit pref] May 13 03:38:11.047007 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 May 13 03:38:11.047103 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc0c0-0xc0ff] May 13 03:38:11.047197 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xc00000c000-0xc00000ffff 64bit pref] May 13 03:38:11.047331 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 May 13 03:38:11.047427 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc100-0xc11f] May 13 03:38:11.047526 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfeb93000-0xfeb93fff] May 13 03:38:11.047638 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xc000010000-0xc000013fff 64bit pref] May 13 03:38:11.047652 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 May 13 03:38:11.047662 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 May 13 03:38:11.047671 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 May 13 03:38:11.047680 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 May 13 03:38:11.047689 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 May 13 03:38:11.047699 kernel: iommu: Default domain type: Translated May 13 03:38:11.047711 kernel: iommu: DMA domain TLB invalidation policy: lazy mode May 13 03:38:11.047721 kernel: PCI: Using ACPI for IRQ routing May 13 03:38:11.047730 kernel: PCI: pci_cache_line_size set to 64 bytes May 13 03:38:11.047740 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] May 13 03:38:11.047749 kernel: e820: reserve RAM buffer [mem 0xbffdd000-0xbfffffff] May 13 03:38:11.047842 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device May 13 03:38:11.047937 kernel: pci 0000:00:02.0: vgaarb: bridge control possible May 13 03:38:11.048031 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
May 13 03:38:11.048044 kernel: vgaarb: loaded May 13 03:38:11.048057 kernel: clocksource: Switched to clocksource kvm-clock May 13 03:38:11.048066 kernel: VFS: Disk quotas dquot_6.6.0 May 13 03:38:11.048075 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) May 13 03:38:11.048085 kernel: pnp: PnP ACPI init May 13 03:38:11.048190 kernel: pnp 00:03: [dma 2] May 13 03:38:11.048206 kernel: pnp: PnP ACPI: found 5 devices May 13 03:38:11.048216 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns May 13 03:38:11.048225 kernel: NET: Registered PF_INET protocol family May 13 03:38:11.048261 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) May 13 03:38:11.050252 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) May 13 03:38:11.050262 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) May 13 03:38:11.050278 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) May 13 03:38:11.050288 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) May 13 03:38:11.050297 kernel: TCP: Hash tables configured (established 32768 bind 32768) May 13 03:38:11.050306 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) May 13 03:38:11.050315 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) May 13 03:38:11.050325 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family May 13 03:38:11.050339 kernel: NET: Registered PF_XDP protocol family May 13 03:38:11.050436 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] May 13 03:38:11.050519 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] May 13 03:38:11.050602 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] May 13 03:38:11.050684 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
May 13 03:38:11.050766 kernel: pci_bus 0000:00: resource 8 [mem 0xc000000000-0xc07fffffff window] May 13 03:38:11.050864 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release May 13 03:38:11.050961 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers May 13 03:38:11.050979 kernel: PCI: CLS 0 bytes, default 64 May 13 03:38:11.050989 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) May 13 03:38:11.050999 kernel: software IO TLB: mapped [mem 0x00000000bbfdd000-0x00000000bffdd000] (64MB) May 13 03:38:11.051009 kernel: Initialise system trusted keyrings May 13 03:38:11.051018 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 May 13 03:38:11.051027 kernel: Key type asymmetric registered May 13 03:38:11.051036 kernel: Asymmetric key parser 'x509' registered May 13 03:38:11.051046 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) May 13 03:38:11.051055 kernel: io scheduler mq-deadline registered May 13 03:38:11.051067 kernel: io scheduler kyber registered May 13 03:38:11.051076 kernel: io scheduler bfq registered May 13 03:38:11.051085 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 May 13 03:38:11.051095 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10 May 13 03:38:11.051104 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 May 13 03:38:11.051114 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 May 13 03:38:11.051123 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 May 13 03:38:11.051133 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled May 13 03:38:11.051142 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A May 13 03:38:11.051153 kernel: random: crng init done May 13 03:38:11.051162 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 May 13 03:38:11.051171 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 May 13 03:38:11.051180 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 May 13 03:38:11.051304 kernel: rtc_cmos 00:04: RTC can wake from S4
May 13 03:38:11.051320 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 May 13 03:38:11.051404 kernel: rtc_cmos 00:04: registered as rtc0 May 13 03:38:11.051491 kernel: rtc_cmos 00:04: setting system clock to 2025-05-13T03:38:10 UTC (1747107490) May 13 03:38:11.051583 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram May 13 03:38:11.051596 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled May 13 03:38:11.051620 kernel: NET: Registered PF_INET6 protocol family May 13 03:38:11.051630 kernel: Segment Routing with IPv6 May 13 03:38:11.051639 kernel: In-situ OAM (IOAM) with IPv6 May 13 03:38:11.051648 kernel: NET: Registered PF_PACKET protocol family May 13 03:38:11.051657 kernel: Key type dns_resolver registered May 13 03:38:11.051666 kernel: IPI shorthand broadcast: enabled May 13 03:38:11.051676 kernel: sched_clock: Marking stable (1028008257, 171153771)->(1239085545, -39923517) May 13 03:38:11.051691 kernel: registered taskstats version 1 May 13 03:38:11.051700 kernel: Loading compiled-in X.509 certificates May 13 03:38:11.051709 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.89-flatcar: 72bf95fdb9aed340290dd5f38e76c1ea0e6f32b4' May 13 03:38:11.051718 kernel: Key type .fscrypt registered May 13 03:38:11.051727 kernel: Key type fscrypt-provisioning registered May 13 03:38:11.051736 kernel: ima: No TPM chip found, activating TPM-bypass!
May 13 03:38:11.051745 kernel: ima: Allocated hash algorithm: sha1 May 13 03:38:11.051754 kernel: ima: No architecture policies found May 13 03:38:11.051766 kernel: clk: Disabling unused clocks May 13 03:38:11.051775 kernel: Freeing unused kernel image (initmem) memory: 43604K May 13 03:38:11.051785 kernel: Write protecting the kernel read-only data: 40960k May 13 03:38:11.051794 kernel: Freeing unused kernel image (rodata/data gap) memory: 1556K May 13 03:38:11.051803 kernel: Run /init as init process May 13 03:38:11.051812 kernel: with arguments: May 13 03:38:11.051821 kernel: /init May 13 03:38:11.051830 kernel: with environment: May 13 03:38:11.051839 kernel: HOME=/ May 13 03:38:11.051848 kernel: TERM=linux May 13 03:38:11.051859 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 13 03:38:11.051869 systemd[1]: Successfully made /usr/ read-only. May 13 03:38:11.051882 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 13 03:38:11.051893 systemd[1]: Detected virtualization kvm. May 13 03:38:11.051903 systemd[1]: Detected architecture x86-64. May 13 03:38:11.051912 systemd[1]: Running in initrd. May 13 03:38:11.051924 systemd[1]: No hostname configured, using default hostname. May 13 03:38:11.051934 systemd[1]: Hostname set to . May 13 03:38:11.051944 systemd[1]: Initializing machine ID from VM UUID. May 13 03:38:11.051954 systemd[1]: Queued start job for default target initrd.target. May 13 03:38:11.051963 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 13 03:38:11.051973 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
May 13 03:38:11.051984 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... May 13 03:38:11.052003 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 13 03:38:11.052015 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... May 13 03:38:11.052026 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... May 13 03:38:11.052037 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... May 13 03:38:11.052048 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... May 13 03:38:11.052058 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 13 03:38:11.052070 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 13 03:38:11.052080 systemd[1]: Reached target paths.target - Path Units. May 13 03:38:11.052090 systemd[1]: Reached target slices.target - Slice Units. May 13 03:38:11.052100 systemd[1]: Reached target swap.target - Swaps. May 13 03:38:11.052110 systemd[1]: Reached target timers.target - Timer Units. May 13 03:38:11.052120 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. May 13 03:38:11.052130 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 13 03:38:11.052140 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). May 13 03:38:11.052150 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. May 13 03:38:11.052162 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 13 03:38:11.052172 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 13 03:38:11.052183 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
May 13 03:38:11.052193 systemd[1]: Reached target sockets.target - Socket Units. May 13 03:38:11.052203 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... May 13 03:38:11.052213 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 13 03:38:11.052223 systemd[1]: Finished network-cleanup.service - Network Cleanup. May 13 03:38:11.053297 systemd[1]: Starting systemd-fsck-usr.service... May 13 03:38:11.053312 systemd[1]: Starting systemd-journald.service - Journal Service... May 13 03:38:11.053322 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 13 03:38:11.053332 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 13 03:38:11.053368 systemd-journald[184]: Collecting audit messages is disabled. May 13 03:38:11.053397 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. May 13 03:38:11.053408 systemd-journald[184]: Journal started May 13 03:38:11.053431 systemd-journald[184]: Runtime Journal (/run/log/journal/3b837c81c12844ffb02d4a2de1d6077f) is 8M, max 78.2M, 70.2M free. May 13 03:38:11.049088 systemd-modules-load[186]: Inserted module 'overlay' May 13 03:38:11.091577 systemd[1]: Started systemd-journald.service - Journal Service. May 13 03:38:11.096145 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 13 03:38:11.095601 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 13 03:38:11.098875 kernel: Bridge firewalling registered May 13 03:38:11.097008 systemd-modules-load[186]: Inserted module 'br_netfilter' May 13 03:38:11.097465 systemd[1]: Finished systemd-fsck-usr.service. May 13 03:38:11.099394 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 13 03:38:11.100303 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
May 13 03:38:11.104420 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 13 03:38:11.107340 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 13 03:38:11.113001 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 13 03:38:11.117465 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 13 03:38:11.126748 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 13 03:38:11.128288 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 13 03:38:11.133407 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 13 03:38:11.135282 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 13 03:38:11.146812 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 13 03:38:11.149087 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... May 13 03:38:11.154370 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 13 03:38:11.155136 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
May 13 03:38:11.175253 dracut-cmdline[218]: dracut-dracut-053 May 13 03:38:11.175253 dracut-cmdline[218]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=6100311d7c6d7eb9a6a36f80463a2db3a7c7060cd315301434a372d8e2ca9bd6 May 13 03:38:11.197406 systemd-resolved[219]: Positive Trust Anchors: May 13 03:38:11.198100 systemd-resolved[219]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 13 03:38:11.198673 systemd-resolved[219]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 13 03:38:11.204260 systemd-resolved[219]: Defaulting to hostname 'linux'. May 13 03:38:11.205139 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 13 03:38:11.205702 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 13 03:38:11.248299 kernel: SCSI subsystem initialized May 13 03:38:11.258300 kernel: Loading iSCSI transport class v2.0-870. 
May 13 03:38:11.271477 kernel: iscsi: registered transport (tcp) May 13 03:38:11.300728 kernel: iscsi: registered transport (qla4xxx) May 13 03:38:11.300799 kernel: QLogic iSCSI HBA Driver May 13 03:38:11.369979 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. May 13 03:38:11.375507 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... May 13 03:38:11.437373 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. May 13 03:38:11.437479 kernel: device-mapper: uevent: version 1.0.3 May 13 03:38:11.437511 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com May 13 03:38:11.500397 kernel: raid6: sse2x4 gen() 5112 MB/s May 13 03:38:11.519287 kernel: raid6: sse2x2 gen() 5822 MB/s May 13 03:38:11.537877 kernel: raid6: sse2x1 gen() 9225 MB/s May 13 03:38:11.537949 kernel: raid6: using algorithm sse2x1 gen() 9225 MB/s May 13 03:38:11.556650 kernel: raid6: .... xor() 7267 MB/s, rmw enabled May 13 03:38:11.556713 kernel: raid6: using ssse3x2 recovery algorithm May 13 03:38:11.579787 kernel: xor: measuring software checksum speed May 13 03:38:11.579878 kernel: prefetch64-sse : 18526 MB/sec May 13 03:38:11.580337 kernel: generic_sse : 11949 MB/sec May 13 03:38:11.581472 kernel: xor: using function: prefetch64-sse (18526 MB/sec) May 13 03:38:11.765312 kernel: Btrfs loaded, zoned=no, fsverity=no May 13 03:38:11.782614 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. May 13 03:38:11.788513 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 13 03:38:11.816177 systemd-udevd[404]: Using default interface naming scheme 'v255'. May 13 03:38:11.821135 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 13 03:38:11.828455 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... 
May 13 03:38:11.861944 dracut-pre-trigger[413]: rd.md=0: removing MD RAID activation May 13 03:38:11.907359 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. May 13 03:38:11.912735 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 13 03:38:11.972294 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 13 03:38:11.978682 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... May 13 03:38:12.013863 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. May 13 03:38:12.033227 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. May 13 03:38:12.035712 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 13 03:38:12.036769 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 13 03:38:12.040874 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... May 13 03:38:12.071256 kernel: virtio_blk virtio2: 2/0/0 default/read/poll queues May 13 03:38:12.071464 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. May 13 03:38:12.081792 kernel: virtio_blk virtio2: [vda] 20971520 512-byte logical blocks (10.7 GB/10.0 GiB) May 13 03:38:12.105352 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. May 13 03:38:12.105411 kernel: GPT:17805311 != 20971519 May 13 03:38:12.106319 kernel: GPT:Alternate GPT header not at the end of the disk. May 13 03:38:12.108272 kernel: GPT:17805311 != 20971519 May 13 03:38:12.108292 kernel: GPT: Use GNU Parted to correct GPT errors. May 13 03:38:12.110667 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 13 03:38:12.119546 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 13 03:38:12.119709 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 13 03:38:12.123854 kernel: libata version 3.00 loaded. 
May 13 03:38:12.120729 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 13 03:38:12.121699 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 13 03:38:12.127516 kernel: ata_piix 0000:00:01.1: version 2.13 May 13 03:38:12.122324 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 13 03:38:12.131102 kernel: scsi host0: ata_piix May 13 03:38:12.131258 kernel: scsi host1: ata_piix May 13 03:38:12.125936 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... May 13 03:38:12.132783 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 13 03:38:12.138277 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc120 irq 14 May 13 03:38:12.138294 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc128 irq 15 May 13 03:38:12.138305 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. May 13 03:38:12.167287 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (475) May 13 03:38:12.173239 kernel: BTRFS: device fsid d5ab0fb8-9c4f-4805-8fe7-b120550325cd devid 1 transid 40 /dev/vda3 scanned by (udev-worker) (463) May 13 03:38:12.199481 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. May 13 03:38:12.220410 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 13 03:38:12.247287 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. May 13 03:38:12.262718 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 13 03:38:12.272479 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. 
May 13 03:38:12.273125 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. May 13 03:38:12.276418 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... May 13 03:38:12.280348 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 13 03:38:12.294453 disk-uuid[505]: Primary Header is updated. May 13 03:38:12.294453 disk-uuid[505]: Secondary Entries is updated. May 13 03:38:12.294453 disk-uuid[505]: Secondary Header is updated. May 13 03:38:12.299416 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 13 03:38:12.305376 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 13 03:38:13.326406 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 13 03:38:13.327951 disk-uuid[513]: The operation has completed successfully. May 13 03:38:13.413802 systemd[1]: disk-uuid.service: Deactivated successfully. May 13 03:38:13.413909 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. May 13 03:38:13.455710 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... May 13 03:38:13.475783 sh[525]: Success May 13 03:38:13.510288 kernel: device-mapper: verity: sha256 using implementation "sha256-ssse3" May 13 03:38:13.600076 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. May 13 03:38:13.608407 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... May 13 03:38:13.615031 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
May 13 03:38:13.637605 kernel: BTRFS info (device dm-0): first mount of filesystem d5ab0fb8-9c4f-4805-8fe7-b120550325cd May 13 03:38:13.637681 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm May 13 03:38:13.639579 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead May 13 03:38:13.641669 kernel: BTRFS info (device dm-0): disabling log replay at mount time May 13 03:38:13.643289 kernel: BTRFS info (device dm-0): using free space tree May 13 03:38:13.659860 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. May 13 03:38:13.661627 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. May 13 03:38:13.665213 systemd[1]: Starting ignition-setup.service - Ignition (setup)... May 13 03:38:13.670059 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... May 13 03:38:13.722335 kernel: BTRFS info (device vda6): first mount of filesystem af1b0b29-eeab-4df3-872e-4ad99309ae06 May 13 03:38:13.730589 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 13 03:38:13.730658 kernel: BTRFS info (device vda6): using free space tree May 13 03:38:13.742343 kernel: BTRFS info (device vda6): auto enabling async discard May 13 03:38:13.755279 kernel: BTRFS info (device vda6): last unmount of filesystem af1b0b29-eeab-4df3-872e-4ad99309ae06 May 13 03:38:13.766020 systemd[1]: Finished ignition-setup.service - Ignition (setup). May 13 03:38:13.769382 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... May 13 03:38:13.855964 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 13 03:38:13.860352 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
May 13 03:38:13.894350 systemd-networkd[706]: lo: Link UP May 13 03:38:13.894360 systemd-networkd[706]: lo: Gained carrier May 13 03:38:13.895779 systemd-networkd[706]: Enumeration completed May 13 03:38:13.895849 systemd[1]: Started systemd-networkd.service - Network Configuration. May 13 03:38:13.896871 systemd-networkd[706]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 13 03:38:13.896876 systemd-networkd[706]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 13 03:38:13.897767 systemd[1]: Reached target network.target - Network. May 13 03:38:13.899032 systemd-networkd[706]: eth0: Link UP May 13 03:38:13.899035 systemd-networkd[706]: eth0: Gained carrier May 13 03:38:13.899043 systemd-networkd[706]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 13 03:38:13.915067 systemd-networkd[706]: eth0: DHCPv4 address 172.24.4.174/24, gateway 172.24.4.1 acquired from 172.24.4.1 May 13 03:38:13.925154 ignition[646]: Ignition 2.20.0 May 13 03:38:13.925173 ignition[646]: Stage: fetch-offline May 13 03:38:13.926611 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). May 13 03:38:13.925209 ignition[646]: no configs at "/usr/lib/ignition/base.d" May 13 03:38:13.925221 ignition[646]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" May 13 03:38:13.925355 ignition[646]: parsed url from cmdline: "" May 13 03:38:13.925361 ignition[646]: no config URL provided May 13 03:38:13.925367 ignition[646]: reading system config file "/usr/lib/ignition/user.ign" May 13 03:38:13.925378 ignition[646]: no config at "/usr/lib/ignition/user.ign" May 13 03:38:13.931439 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
May 13 03:38:13.925383 ignition[646]: failed to fetch config: resource requires networking May 13 03:38:13.925580 ignition[646]: Ignition finished successfully May 13 03:38:13.953707 ignition[718]: Ignition 2.20.0 May 13 03:38:13.953723 ignition[718]: Stage: fetch May 13 03:38:13.953908 ignition[718]: no configs at "/usr/lib/ignition/base.d" May 13 03:38:13.953922 ignition[718]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" May 13 03:38:13.954029 ignition[718]: parsed url from cmdline: "" May 13 03:38:13.954033 ignition[718]: no config URL provided May 13 03:38:13.954040 ignition[718]: reading system config file "/usr/lib/ignition/user.ign" May 13 03:38:13.954049 ignition[718]: no config at "/usr/lib/ignition/user.ign" May 13 03:38:13.954180 ignition[718]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 May 13 03:38:13.954201 ignition[718]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... May 13 03:38:13.954255 ignition[718]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... May 13 03:38:14.158892 ignition[718]: GET result: OK May 13 03:38:14.159072 ignition[718]: parsing config with SHA512: 5c7cb210629ecae86b056a3d3a584923eb18c73c8af86ca4b30e4b9c704c1b565672c1b7fa84879bd23927c32ab8a2ecfdb4024718f4aaa44f39c9f84978abc4 May 13 03:38:14.174722 unknown[718]: fetched base config from "system" May 13 03:38:14.174753 unknown[718]: fetched base config from "system" May 13 03:38:14.175853 ignition[718]: fetch: fetch complete May 13 03:38:14.174768 unknown[718]: fetched user config from "openstack" May 13 03:38:14.175867 ignition[718]: fetch: fetch passed May 13 03:38:14.179201 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). May 13 03:38:14.175960 ignition[718]: Ignition finished successfully May 13 03:38:14.185504 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
May 13 03:38:14.232028 ignition[724]: Ignition 2.20.0 May 13 03:38:14.232058 ignition[724]: Stage: kargs May 13 03:38:14.232549 ignition[724]: no configs at "/usr/lib/ignition/base.d" May 13 03:38:14.232577 ignition[724]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" May 13 03:38:14.237697 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). May 13 03:38:14.234972 ignition[724]: kargs: kargs passed May 13 03:38:14.235078 ignition[724]: Ignition finished successfully May 13 03:38:14.243563 systemd[1]: Starting ignition-disks.service - Ignition (disks)... May 13 03:38:14.290097 ignition[730]: Ignition 2.20.0 May 13 03:38:14.291853 ignition[730]: Stage: disks May 13 03:38:14.292329 ignition[730]: no configs at "/usr/lib/ignition/base.d" May 13 03:38:14.292357 ignition[730]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" May 13 03:38:14.299120 ignition[730]: disks: disks passed May 13 03:38:14.300435 ignition[730]: Ignition finished successfully May 13 03:38:14.302471 systemd[1]: Finished ignition-disks.service - Ignition (disks). May 13 03:38:14.304789 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. May 13 03:38:14.306706 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. May 13 03:38:14.309722 systemd[1]: Reached target local-fs.target - Local File Systems. May 13 03:38:14.312681 systemd[1]: Reached target sysinit.target - System Initialization. May 13 03:38:14.315181 systemd[1]: Reached target basic.target - Basic System. May 13 03:38:14.320016 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... May 13 03:38:14.369351 systemd-fsck[739]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks May 13 03:38:14.383790 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. May 13 03:38:14.387895 systemd[1]: Mounting sysroot.mount - /sysroot... 
May 13 03:38:14.553271 kernel: EXT4-fs (vda9): mounted filesystem c9958eea-1ed5-48cc-be53-8e1c8ef051da r/w with ordered data mode. Quota mode: none. May 13 03:38:14.554017 systemd[1]: Mounted sysroot.mount - /sysroot. May 13 03:38:14.555634 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. May 13 03:38:14.558903 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 13 03:38:14.562297 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... May 13 03:38:14.563489 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. May 13 03:38:14.565000 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent... May 13 03:38:14.567093 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 13 03:38:14.567123 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. May 13 03:38:14.580689 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. May 13 03:38:14.593505 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... May 13 03:38:14.615508 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (747) May 13 03:38:14.615590 kernel: BTRFS info (device vda6): first mount of filesystem af1b0b29-eeab-4df3-872e-4ad99309ae06 May 13 03:38:14.615674 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 13 03:38:14.615719 kernel: BTRFS info (device vda6): using free space tree May 13 03:38:14.628422 kernel: BTRFS info (device vda6): auto enabling async discard May 13 03:38:14.631388 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
May 13 03:38:14.695140 initrd-setup-root[775]: cut: /sysroot/etc/passwd: No such file or directory May 13 03:38:14.702066 initrd-setup-root[782]: cut: /sysroot/etc/group: No such file or directory May 13 03:38:14.709424 initrd-setup-root[789]: cut: /sysroot/etc/shadow: No such file or directory May 13 03:38:14.715863 initrd-setup-root[796]: cut: /sysroot/etc/gshadow: No such file or directory May 13 03:38:14.822497 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. May 13 03:38:14.826626 systemd[1]: Starting ignition-mount.service - Ignition (mount)... May 13 03:38:14.829433 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... May 13 03:38:14.844430 systemd[1]: sysroot-oem.mount: Deactivated successfully. May 13 03:38:14.846406 kernel: BTRFS info (device vda6): last unmount of filesystem af1b0b29-eeab-4df3-872e-4ad99309ae06 May 13 03:38:14.878796 ignition[863]: INFO : Ignition 2.20.0 May 13 03:38:14.879623 ignition[863]: INFO : Stage: mount May 13 03:38:14.880356 ignition[863]: INFO : no configs at "/usr/lib/ignition/base.d" May 13 03:38:14.882074 ignition[863]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" May 13 03:38:14.882074 ignition[863]: INFO : mount: mount passed May 13 03:38:14.882074 ignition[863]: INFO : Ignition finished successfully May 13 03:38:14.883105 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. May 13 03:38:14.885777 systemd[1]: Finished ignition-mount.service - Ignition (mount). 
May 13 03:38:15.613647 systemd-networkd[706]: eth0: Gained IPv6LL May 13 03:38:21.739152 coreos-metadata[749]: May 13 03:38:21.739 WARN failed to locate config-drive, using the metadata service API instead May 13 03:38:21.788292 coreos-metadata[749]: May 13 03:38:21.788 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 May 13 03:38:21.803132 coreos-metadata[749]: May 13 03:38:21.803 INFO Fetch successful May 13 03:38:21.804610 coreos-metadata[749]: May 13 03:38:21.803 INFO wrote hostname ci-4284-0-0-n-62b177a255.novalocal to /sysroot/etc/hostname May 13 03:38:21.809454 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. May 13 03:38:21.809770 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent. May 13 03:38:21.817454 systemd[1]: Starting ignition-files.service - Ignition (files)... May 13 03:38:21.854945 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 13 03:38:21.895299 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (881) May 13 03:38:21.906299 kernel: BTRFS info (device vda6): first mount of filesystem af1b0b29-eeab-4df3-872e-4ad99309ae06 May 13 03:38:21.906407 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 13 03:38:21.906438 kernel: BTRFS info (device vda6): using free space tree May 13 03:38:21.918357 kernel: BTRFS info (device vda6): auto enabling async discard May 13 03:38:21.923107 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
May 13 03:38:21.972221 ignition[899]: INFO : Ignition 2.20.0 May 13 03:38:21.975077 ignition[899]: INFO : Stage: files May 13 03:38:21.975077 ignition[899]: INFO : no configs at "/usr/lib/ignition/base.d" May 13 03:38:21.975077 ignition[899]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" May 13 03:38:21.981225 ignition[899]: DEBUG : files: compiled without relabeling support, skipping May 13 03:38:21.983413 ignition[899]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 13 03:38:21.983413 ignition[899]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 13 03:38:21.990162 ignition[899]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 13 03:38:21.992741 ignition[899]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 13 03:38:21.994699 ignition[899]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 13 03:38:21.994148 unknown[899]: wrote ssh authorized keys file for user: core May 13 03:38:22.000027 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" May 13 03:38:22.000027 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 May 13 03:38:22.080034 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK May 13 03:38:23.004607 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" May 13 03:38:23.004607 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" May 13 03:38:23.004607 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET 
https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 May 13 03:38:23.780435 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK May 13 03:38:24.337684 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" May 13 03:38:24.337684 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" May 13 03:38:24.342602 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" May 13 03:38:24.342602 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" May 13 03:38:24.342602 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" May 13 03:38:24.342602 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 13 03:38:24.342602 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 13 03:38:24.342602 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 13 03:38:24.342602 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 13 03:38:24.342602 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" May 13 03:38:24.342602 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 13 03:38:24.342602 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(a): 
[started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" May 13 03:38:24.342602 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" May 13 03:38:24.342602 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" May 13 03:38:24.342602 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-x86-64.raw: attempt #1 May 13 03:38:24.811171 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK May 13 03:38:26.334730 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" May 13 03:38:26.334730 ignition[899]: INFO : files: op(c): [started] processing unit "prepare-helm.service" May 13 03:38:26.338300 ignition[899]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 13 03:38:26.338300 ignition[899]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 13 03:38:26.338300 ignition[899]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" May 13 03:38:26.338300 ignition[899]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" May 13 03:38:26.338300 ignition[899]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" May 13 03:38:26.338300 ignition[899]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" May 13 
03:38:26.338300 ignition[899]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" May 13 03:38:26.338300 ignition[899]: INFO : files: files passed May 13 03:38:26.338300 ignition[899]: INFO : Ignition finished successfully May 13 03:38:26.338895 systemd[1]: Finished ignition-files.service - Ignition (files). May 13 03:38:26.349058 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... May 13 03:38:26.353518 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... May 13 03:38:26.364504 systemd[1]: ignition-quench.service: Deactivated successfully. May 13 03:38:26.365214 systemd[1]: Finished ignition-quench.service - Ignition (record completion). May 13 03:38:26.379313 initrd-setup-root-after-ignition[929]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 13 03:38:26.379313 initrd-setup-root-after-ignition[929]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory May 13 03:38:26.380982 initrd-setup-root-after-ignition[933]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 13 03:38:26.383390 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. May 13 03:38:26.386169 systemd[1]: Reached target ignition-complete.target - Ignition Complete. May 13 03:38:26.390432 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... May 13 03:38:26.451008 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 13 03:38:26.451218 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. May 13 03:38:26.456078 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. May 13 03:38:26.457461 systemd[1]: Reached target initrd.target - Initrd Default Target. 
May 13 03:38:26.459427 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. May 13 03:38:26.461192 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... May 13 03:38:26.484046 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 13 03:38:26.490518 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... May 13 03:38:26.516907 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. May 13 03:38:26.520198 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. May 13 03:38:26.521998 systemd[1]: Stopped target timers.target - Timer Units. May 13 03:38:26.524489 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 13 03:38:26.524791 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 13 03:38:26.527699 systemd[1]: Stopped target initrd.target - Initrd Default Target. May 13 03:38:26.530519 systemd[1]: Stopped target basic.target - Basic System. May 13 03:38:26.532670 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. May 13 03:38:26.534841 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. May 13 03:38:26.537276 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. May 13 03:38:26.539856 systemd[1]: Stopped target remote-fs.target - Remote File Systems. May 13 03:38:26.542339 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. May 13 03:38:26.544924 systemd[1]: Stopped target sysinit.target - System Initialization. May 13 03:38:26.547094 systemd[1]: Stopped target local-fs.target - Local File Systems. May 13 03:38:26.548800 systemd[1]: Stopped target swap.target - Swaps. May 13 03:38:26.550281 systemd[1]: dracut-pre-mount.service: Deactivated successfully. 
May 13 03:38:26.550462 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
May 13 03:38:26.552263 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
May 13 03:38:26.553250 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 13 03:38:26.555041 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
May 13 03:38:26.555152 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 13 03:38:26.556800 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 13 03:38:26.556919 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
May 13 03:38:26.559162 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 13 03:38:26.559328 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
May 13 03:38:26.560315 systemd[1]: ignition-files.service: Deactivated successfully.
May 13 03:38:26.560455 systemd[1]: Stopped ignition-files.service - Ignition (files).
May 13 03:38:26.564434 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
May 13 03:38:26.567557 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
May 13 03:38:26.569829 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 13 03:38:26.570004 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
May 13 03:38:26.573388 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 13 03:38:26.573520 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
May 13 03:38:26.583476 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 13 03:38:26.583597 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
May 13 03:38:26.592974 ignition[953]: INFO : Ignition 2.20.0
May 13 03:38:26.592974 ignition[953]: INFO : Stage: umount
May 13 03:38:26.595329 ignition[953]: INFO : no configs at "/usr/lib/ignition/base.d"
May 13 03:38:26.595329 ignition[953]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
May 13 03:38:26.598514 ignition[953]: INFO : umount: umount passed
May 13 03:38:26.598514 ignition[953]: INFO : Ignition finished successfully
May 13 03:38:26.599062 systemd[1]: ignition-mount.service: Deactivated successfully.
May 13 03:38:26.599174 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
May 13 03:38:26.599929 systemd[1]: ignition-disks.service: Deactivated successfully.
May 13 03:38:26.599979 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
May 13 03:38:26.601090 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 13 03:38:26.601138 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
May 13 03:38:26.601674 systemd[1]: ignition-fetch.service: Deactivated successfully.
May 13 03:38:26.601715 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
May 13 03:38:26.602226 systemd[1]: Stopped target network.target - Network.
May 13 03:38:26.602676 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 13 03:38:26.602719 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
May 13 03:38:26.603334 systemd[1]: Stopped target paths.target - Path Units.
May 13 03:38:26.604249 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 13 03:38:26.608291 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 13 03:38:26.609300 systemd[1]: Stopped target slices.target - Slice Units.
May 13 03:38:26.610393 systemd[1]: Stopped target sockets.target - Socket Units.
May 13 03:38:26.611527 systemd[1]: iscsid.socket: Deactivated successfully.
May 13 03:38:26.611578 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
May 13 03:38:26.613052 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 13 03:38:26.613085 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 13 03:38:26.615034 systemd[1]: ignition-setup.service: Deactivated successfully.
May 13 03:38:26.615078 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
May 13 03:38:26.617433 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
May 13 03:38:26.617475 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
May 13 03:38:26.618665 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
May 13 03:38:26.619831 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
May 13 03:38:26.624956 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 13 03:38:26.626032 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 13 03:38:26.626127 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
May 13 03:38:26.629486 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
May 13 03:38:26.629710 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 13 03:38:26.629798 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
May 13 03:38:26.631501 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 13 03:38:26.631604 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
May 13 03:38:26.633881 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
May 13 03:38:26.635157 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 13 03:38:26.635364 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
May 13 03:38:26.636615 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 13 03:38:26.636671 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
May 13 03:38:26.642313 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
May 13 03:38:26.643570 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 13 03:38:26.643655 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 13 03:38:26.644209 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 13 03:38:26.644272 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 13 03:38:26.645810 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 13 03:38:26.645865 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
May 13 03:38:26.647406 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
May 13 03:38:26.647451 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
May 13 03:38:26.648950 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 13 03:38:26.652005 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
May 13 03:38:26.652078 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
May 13 03:38:26.662547 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 13 03:38:26.662687 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 13 03:38:26.663840 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 13 03:38:26.663892 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
May 13 03:38:26.664790 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 13 03:38:26.664823 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
May 13 03:38:26.666043 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 13 03:38:26.666086 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
May 13 03:38:26.667843 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 13 03:38:26.667885 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
May 13 03:38:26.669028 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 13 03:38:26.669073 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 13 03:38:26.671338 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
May 13 03:38:26.673540 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 13 03:38:26.673592 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 13 03:38:26.674936 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
May 13 03:38:26.674980 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 13 03:38:26.676704 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 13 03:38:26.676770 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
May 13 03:38:26.677368 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 13 03:38:26.677419 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 13 03:38:26.679970 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
May 13 03:38:26.680030 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
May 13 03:38:26.682398 systemd[1]: network-cleanup.service: Deactivated successfully.
May 13 03:38:26.682496 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
May 13 03:38:26.687895 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 13 03:38:26.688008 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
May 13 03:38:26.689022 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
May 13 03:38:26.691390 systemd[1]: Starting initrd-switch-root.service - Switch Root...
May 13 03:38:26.707412 systemd[1]: Switching root.
May 13 03:38:26.736932 systemd-journald[184]: Journal stopped
May 13 03:38:28.662294 systemd-journald[184]: Received SIGTERM from PID 1 (systemd).
May 13 03:38:28.662364 kernel: SELinux: policy capability network_peer_controls=1
May 13 03:38:28.662383 kernel: SELinux: policy capability open_perms=1
May 13 03:38:28.662396 kernel: SELinux: policy capability extended_socket_class=1
May 13 03:38:28.662408 kernel: SELinux: policy capability always_check_network=0
May 13 03:38:28.662421 kernel: SELinux: policy capability cgroup_seclabel=1
May 13 03:38:28.662441 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 13 03:38:28.662453 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 13 03:38:28.662466 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 13 03:38:28.662478 kernel: audit: type=1403 audit(1747107507.557:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
May 13 03:38:28.662497 systemd[1]: Successfully loaded SELinux policy in 72.659ms.
May 13 03:38:28.662524 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 13.807ms.
May 13 03:38:28.662540 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 13 03:38:28.662555 systemd[1]: Detected virtualization kvm.
May 13 03:38:28.662570 systemd[1]: Detected architecture x86-64.
May 13 03:38:28.662588 systemd[1]: Detected first boot.
May 13 03:38:28.662602 systemd[1]: Hostname set to .
May 13 03:38:28.662616 systemd[1]: Initializing machine ID from VM UUID.
May 13 03:38:28.662629 zram_generator::config[998]: No configuration found.
May 13 03:38:28.662644 kernel: Guest personality initialized and is inactive
May 13 03:38:28.662661 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
May 13 03:38:28.662674 kernel: Initialized host personality
May 13 03:38:28.662687 kernel: NET: Registered PF_VSOCK protocol family
May 13 03:38:28.662702 systemd[1]: Populated /etc with preset unit settings.
May 13 03:38:28.662717 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
May 13 03:38:28.662731 systemd[1]: initrd-switch-root.service: Deactivated successfully.
May 13 03:38:28.662746 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
May 13 03:38:28.662761 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
May 13 03:38:28.662776 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
May 13 03:38:28.662792 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
May 13 03:38:28.662806 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
May 13 03:38:28.662821 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
May 13 03:38:28.662838 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
May 13 03:38:28.662853 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
May 13 03:38:28.662869 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
May 13 03:38:28.662884 systemd[1]: Created slice user.slice - User and Session Slice.
May 13 03:38:28.662900 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 13 03:38:28.662915 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 13 03:38:28.662930 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
May 13 03:38:28.662945 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
May 13 03:38:28.662963 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
May 13 03:38:28.662978 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 13 03:38:28.662993 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
May 13 03:38:28.663008 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 13 03:38:28.663022 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
May 13 03:38:28.663037 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
May 13 03:38:28.663051 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
May 13 03:38:28.663068 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
May 13 03:38:28.663083 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 13 03:38:28.663098 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 13 03:38:28.663112 systemd[1]: Reached target slices.target - Slice Units.
May 13 03:38:28.663127 systemd[1]: Reached target swap.target - Swaps.
May 13 03:38:28.663141 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
May 13 03:38:28.663156 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
May 13 03:38:28.663171 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
May 13 03:38:28.663186 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 13 03:38:28.663203 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 13 03:38:28.663219 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 13 03:38:28.663634 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
May 13 03:38:28.663654 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
May 13 03:38:28.663669 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
May 13 03:38:28.663688 systemd[1]: Mounting media.mount - External Media Directory...
May 13 03:38:28.663715 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 13 03:38:28.663734 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
May 13 03:38:28.663753 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
May 13 03:38:28.663780 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
May 13 03:38:28.663796 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 13 03:38:28.663809 systemd[1]: Reached target machines.target - Containers.
May 13 03:38:28.663823 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
May 13 03:38:28.663836 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 13 03:38:28.663849 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 13 03:38:28.663862 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
May 13 03:38:28.663875 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 13 03:38:28.663891 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 13 03:38:28.663904 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 13 03:38:28.663917 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
May 13 03:38:28.663931 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 13 03:38:28.663945 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 13 03:38:28.663962 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
May 13 03:38:28.663975 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
May 13 03:38:28.663988 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
May 13 03:38:28.664003 systemd[1]: Stopped systemd-fsck-usr.service.
May 13 03:38:28.664018 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 13 03:38:28.664032 systemd[1]: Starting systemd-journald.service - Journal Service...
May 13 03:38:28.664044 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 13 03:38:28.664057 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 13 03:38:28.664071 kernel: fuse: init (API version 7.39)
May 13 03:38:28.664083 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
May 13 03:38:28.664096 kernel: loop: module loaded
May 13 03:38:28.664109 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
May 13 03:38:28.664125 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 13 03:38:28.664138 systemd[1]: verity-setup.service: Deactivated successfully.
May 13 03:38:28.664151 systemd[1]: Stopped verity-setup.service.
May 13 03:38:28.664165 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 13 03:38:28.664182 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
May 13 03:38:28.664217 systemd-journald[1096]: Collecting audit messages is disabled.
May 13 03:38:28.664272 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
May 13 03:38:28.664288 systemd-journald[1096]: Journal started
May 13 03:38:28.664318 systemd-journald[1096]: Runtime Journal (/run/log/journal/3b837c81c12844ffb02d4a2de1d6077f) is 8M, max 78.2M, 70.2M free.
May 13 03:38:28.321799 systemd[1]: Queued start job for default target multi-user.target.
May 13 03:38:28.333533 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
May 13 03:38:28.333970 systemd[1]: systemd-journald.service: Deactivated successfully.
May 13 03:38:28.667286 systemd[1]: Started systemd-journald.service - Journal Service.
May 13 03:38:28.669403 systemd[1]: Mounted media.mount - External Media Directory.
May 13 03:38:28.669997 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
May 13 03:38:28.676300 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
May 13 03:38:28.676917 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
May 13 03:38:28.677658 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 13 03:38:28.678421 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 13 03:38:28.678585 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
May 13 03:38:28.679350 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 13 03:38:28.679507 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 13 03:38:28.680256 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 13 03:38:28.680410 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 13 03:38:28.681126 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 13 03:38:28.681496 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
May 13 03:38:28.682197 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 13 03:38:28.682364 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 13 03:38:28.683098 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 13 03:38:28.684395 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 13 03:38:28.697738 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 13 03:38:28.702372 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
May 13 03:38:28.708803 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
May 13 03:38:28.711998 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 13 03:38:28.714388 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 13 03:38:28.717380 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 13 03:38:28.721959 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
May 13 03:38:28.724273 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
May 13 03:38:28.725021 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
May 13 03:38:28.725616 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
May 13 03:38:28.728652 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 13 03:38:28.728697 systemd[1]: Reached target local-fs.target - Local File Systems.
May 13 03:38:28.733585 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
May 13 03:38:28.737053 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
May 13 03:38:28.740840 kernel: ACPI: bus type drm_connector registered
May 13 03:38:28.740353 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
May 13 03:38:28.742172 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 13 03:38:28.746378 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
May 13 03:38:28.750682 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
May 13 03:38:28.751363 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 13 03:38:28.756580 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
May 13 03:38:28.762757 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
May 13 03:38:28.770739 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 13 03:38:28.770956 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 13 03:38:28.772932 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
May 13 03:38:28.775134 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
May 13 03:38:28.797901 systemd-journald[1096]: Time spent on flushing to /var/log/journal/3b837c81c12844ffb02d4a2de1d6077f is 69.271ms for 963 entries.
May 13 03:38:28.797901 systemd-journald[1096]: System Journal (/var/log/journal/3b837c81c12844ffb02d4a2de1d6077f) is 8M, max 584.8M, 576.8M free.
May 13 03:38:28.899592 systemd-journald[1096]: Received client request to flush runtime journal.
May 13 03:38:28.899639 kernel: loop0: detected capacity change from 0 to 218376
May 13 03:38:28.826194 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 13 03:38:28.831328 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
May 13 03:38:28.833778 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
May 13 03:38:28.838350 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
May 13 03:38:28.859923 systemd-tmpfiles[1129]: ACLs are not supported, ignoring.
May 13 03:38:28.859938 systemd-tmpfiles[1129]: ACLs are not supported, ignoring.
May 13 03:38:28.872678 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 13 03:38:28.879505 systemd[1]: Starting systemd-sysusers.service - Create System Users...
May 13 03:38:28.901303 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
May 13 03:38:28.908116 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 13 03:38:28.910109 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
May 13 03:38:28.938532 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 13 03:38:28.942285 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
May 13 03:38:28.948210 udevadm[1157]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
May 13 03:38:28.965930 systemd[1]: Finished systemd-sysusers.service - Create System Users.
May 13 03:38:28.973487 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 13 03:38:28.981267 kernel: loop1: detected capacity change from 0 to 151640
May 13 03:38:28.998136 systemd-tmpfiles[1162]: ACLs are not supported, ignoring.
May 13 03:38:28.998157 systemd-tmpfiles[1162]: ACLs are not supported, ignoring.
May 13 03:38:29.003494 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 13 03:38:29.052347 kernel: loop2: detected capacity change from 0 to 109808
May 13 03:38:29.122991 kernel: loop3: detected capacity change from 0 to 8
May 13 03:38:29.147019 kernel: loop4: detected capacity change from 0 to 218376
May 13 03:38:29.211296 kernel: loop5: detected capacity change from 0 to 151640
May 13 03:38:29.259266 kernel: loop6: detected capacity change from 0 to 109808
May 13 03:38:29.290269 kernel: loop7: detected capacity change from 0 to 8
May 13 03:38:29.290554 (sd-merge)[1169]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'.
May 13 03:38:29.292148 (sd-merge)[1169]: Merged extensions into '/usr'.
May 13 03:38:29.298101 systemd[1]: Reload requested from client PID 1139 ('systemd-sysext') (unit systemd-sysext.service)...
May 13 03:38:29.298118 systemd[1]: Reloading...
May 13 03:38:29.374281 zram_generator::config[1195]: No configuration found.
May 13 03:38:29.548188 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 13 03:38:29.637501 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 13 03:38:29.637774 systemd[1]: Reloading finished in 339 ms.
May 13 03:38:29.657218 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
May 13 03:38:29.667564 systemd[1]: Starting ensure-sysext.service...
May 13 03:38:29.671362 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 13 03:38:29.707090 systemd[1]: Reload requested from client PID 1252 ('systemctl') (unit ensure-sysext.service)...
May 13 03:38:29.707109 systemd[1]: Reloading...
May 13 03:38:29.719898 systemd-tmpfiles[1253]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 13 03:38:29.720404 systemd-tmpfiles[1253]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
May 13 03:38:29.727737 systemd-tmpfiles[1253]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 13 03:38:29.728318 systemd-tmpfiles[1253]: ACLs are not supported, ignoring.
May 13 03:38:29.728438 systemd-tmpfiles[1253]: ACLs are not supported, ignoring.
May 13 03:38:29.733623 systemd-tmpfiles[1253]: Detected autofs mount point /boot during canonicalization of boot.
May 13 03:38:29.733637 systemd-tmpfiles[1253]: Skipping /boot
May 13 03:38:29.746179 systemd-tmpfiles[1253]: Detected autofs mount point /boot during canonicalization of boot.
May 13 03:38:29.746193 systemd-tmpfiles[1253]: Skipping /boot
May 13 03:38:29.798570 ldconfig[1135]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 13 03:38:29.816271 zram_generator::config[1279]: No configuration found.
May 13 03:38:29.985399 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 13 03:38:30.079836 systemd[1]: Reloading finished in 372 ms.
May 13 03:38:30.090024 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
May 13 03:38:30.091155 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
May 13 03:38:30.098523 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 13 03:38:30.111203 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 13 03:38:30.117372 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
May 13 03:38:30.120528 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
May 13 03:38:30.128219 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 13 03:38:30.132443 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 13 03:38:30.135700 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
May 13 03:38:30.145480 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 13 03:38:30.145686 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 13 03:38:30.147833 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 13 03:38:30.158893 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 13 03:38:30.167661 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 13 03:38:30.169397 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 13 03:38:30.169540 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 13 03:38:30.169672 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 13 03:38:30.174708 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 13 03:38:30.174908 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 13 03:38:30.175101 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 13 03:38:30.175294 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 13 03:38:30.190907 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
May 13 03:38:30.191654 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 13 03:38:30.193082 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 13 03:38:30.194355 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 13 03:38:30.203023 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 13 03:38:30.203460 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 13 03:38:30.208603 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 13 03:38:30.213549 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 13 03:38:30.216126 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 13 03:38:30.216328 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 13 03:38:30.216557 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 13 03:38:30.217712 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
May 13 03:38:30.228930 systemd[1]: Finished ensure-sysext.service.
May 13 03:38:30.232032 systemd-udevd[1346]: Using default interface naming scheme 'v255'.
May 13 03:38:30.236658 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 13 03:38:30.237294 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 13 03:38:30.244638 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
May 13 03:38:30.253097 systemd[1]: Starting systemd-update-done.service - Update is Completed...
May 13 03:38:30.255301 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
May 13 03:38:30.259580 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 13 03:38:30.260328 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 13 03:38:30.261470 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 13 03:38:30.263565 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 13 03:38:30.264613 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 13 03:38:30.264815 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 13 03:38:30.272310 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
May 13 03:38:30.276472 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 13 03:38:30.276559 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 13 03:38:30.276610 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 13 03:38:30.278785 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 13 03:38:30.285470 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 13 03:38:30.310508 systemd[1]: Finished systemd-update-done.service - Update is Completed.
May 13 03:38:30.318929 systemd[1]: Started systemd-userdbd.service - User Database Manager.
May 13 03:38:30.330868 augenrules[1403]: No rules
May 13 03:38:30.333345 systemd[1]: audit-rules.service: Deactivated successfully.
May 13 03:38:30.333770 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 13 03:38:30.454896 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
May 13 03:38:30.455756 systemd[1]: Reached target time-set.target - System Time Set.
May 13 03:38:30.471891 systemd-networkd[1381]: lo: Link UP
May 13 03:38:30.472301 systemd-networkd[1381]: lo: Gained carrier
May 13 03:38:30.474495 systemd-networkd[1381]: Enumeration completed
May 13 03:38:30.474595 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 13 03:38:30.478501 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
May 13 03:38:30.482496 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
May 13 03:38:30.492920 systemd-resolved[1345]: Positive Trust Anchors:
May 13 03:38:30.492940 systemd-resolved[1345]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 13 03:38:30.492986 systemd-resolved[1345]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 13 03:38:30.507337 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1390)
May 13 03:38:30.509672 systemd-resolved[1345]: Using system hostname 'ci-4284-0-0-n-62b177a255.novalocal'.
May 13 03:38:30.512430 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 13 03:38:30.513102 systemd[1]: Reached target network.target - Network.
May 13 03:38:30.513668 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 13 03:38:30.529362 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
May 13 03:38:30.552758 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
May 13 03:38:30.583128 systemd-networkd[1381]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 13 03:38:30.583138 systemd-networkd[1381]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 13 03:38:30.583778 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 13 03:38:30.585782 systemd-networkd[1381]: eth0: Link UP
May 13 03:38:30.585789 systemd-networkd[1381]: eth0: Gained carrier
May 13 03:38:30.585807 systemd-networkd[1381]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 13 03:38:30.587000 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
May 13 03:38:30.597267 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
May 13 03:38:30.600321 systemd-networkd[1381]: eth0: DHCPv4 address 172.24.4.174/24, gateway 172.24.4.1 acquired from 172.24.4.1
May 13 03:38:30.601789 systemd-timesyncd[1366]: Network configuration changed, trying to establish connection.
May 13 03:38:30.611473 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
May 13 03:38:30.618869 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
May 13 03:38:30.620531 kernel: ACPI: button: Power Button [PWRF]
May 13 03:38:30.644280 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
May 13 03:38:30.673874 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 13 03:38:30.683279 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
May 13 03:38:30.687251 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
May 13 03:38:30.698479 kernel: mousedev: PS/2 mouse device common for all mice
May 13 03:38:30.699967 kernel: Console: switching to colour dummy device 80x25
May 13 03:38:30.700244 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
May 13 03:38:30.700292 kernel: [drm] features: -context_init
May 13 03:38:30.703790 kernel: [drm] number of scanouts: 1
May 13 03:38:30.703828 kernel: [drm] number of cap sets: 0
May 13 03:38:30.706244 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0
May 13 03:38:30.720520 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
May 13 03:38:30.720600 kernel: Console: switching to colour frame buffer device 160x50
May 13 03:38:30.727298 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
May 13 03:38:30.734768 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 13 03:38:30.735072 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 13 03:38:30.738932 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
May 13 03:38:30.745480 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 13 03:38:30.750307 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
May 13 03:38:30.760353 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
May 13 03:38:30.780307 lvm[1445]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 13 03:38:30.809100 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
May 13 03:38:30.809339 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 13 03:38:30.811350 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
May 13 03:38:30.829340 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 13 03:38:30.832997 lvm[1450]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 13 03:38:30.831264 systemd[1]: Reached target sysinit.target - System Initialization.
May 13 03:38:30.831516 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
May 13 03:38:30.831659 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
May 13 03:38:30.831989 systemd[1]: Started logrotate.timer - Daily rotation of log files.
May 13 03:38:30.832221 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
May 13 03:38:30.834866 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
May 13 03:38:30.834950 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
May 13 03:38:30.834980 systemd[1]: Reached target paths.target - Path Units.
May 13 03:38:30.835040 systemd[1]: Reached target timers.target - Timer Units.
May 13 03:38:30.837636 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
May 13 03:38:30.839434 systemd[1]: Starting docker.socket - Docker Socket for the API...
May 13 03:38:30.845937 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
May 13 03:38:30.846215 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
May 13 03:38:30.846333 systemd[1]: Reached target ssh-access.target - SSH Access Available.
May 13 03:38:30.849669 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
May 13 03:38:30.850319 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
May 13 03:38:30.851904 systemd[1]: Listening on docker.socket - Docker Socket for the API.
May 13 03:38:30.854719 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
May 13 03:38:30.857865 systemd[1]: Reached target sockets.target - Socket Units.
May 13 03:38:30.860386 systemd[1]: Reached target basic.target - Basic System.
May 13 03:38:30.861300 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
May 13 03:38:30.861397 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
May 13 03:38:30.864330 systemd[1]: Starting containerd.service - containerd container runtime...
May 13 03:38:30.880743 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
May 13 03:38:30.886172 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
May 13 03:38:30.898311 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
May 13 03:38:30.906740 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
May 13 03:38:30.907616 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
May 13 03:38:30.914395 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
May 13 03:38:30.917541 jq[1461]: false
May 13 03:38:30.918436 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
May 13 03:38:30.924498 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
May 13 03:38:30.929971 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
May 13 03:38:30.940466 systemd[1]: Starting systemd-logind.service - User Login Management...
May 13 03:38:30.942063 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
May 13 03:38:30.944993 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
May 13 03:38:30.947951 systemd[1]: Starting update-engine.service - Update Engine...
May 13 03:38:30.953845 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
May 13 03:38:30.958877 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
May 13 03:38:30.959135 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
May 13 03:38:30.960606 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
May 13 03:38:30.960821 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
May 13 03:38:30.986272 extend-filesystems[1462]: Found loop4
May 13 03:38:30.986272 extend-filesystems[1462]: Found loop5
May 13 03:38:30.986272 extend-filesystems[1462]: Found loop6
May 13 03:38:30.986272 extend-filesystems[1462]: Found loop7
May 13 03:38:30.986272 extend-filesystems[1462]: Found vda
May 13 03:38:30.986272 extend-filesystems[1462]: Found vda1
May 13 03:38:30.986272 extend-filesystems[1462]: Found vda2
May 13 03:38:30.986272 extend-filesystems[1462]: Found vda3
May 13 03:38:30.986272 extend-filesystems[1462]: Found usr
May 13 03:38:31.015264 extend-filesystems[1462]: Found vda4
May 13 03:38:31.015264 extend-filesystems[1462]: Found vda6
May 13 03:38:31.015264 extend-filesystems[1462]: Found vda7
May 13 03:38:31.015264 extend-filesystems[1462]: Found vda9
May 13 03:38:31.015264 extend-filesystems[1462]: Checking size of /dev/vda9
May 13 03:38:30.995354 systemd[1]: Started dbus.service - D-Bus System Message Bus.
May 13 03:38:30.994887 dbus-daemon[1458]: [system] SELinux support is enabled
May 13 03:38:31.024651 jq[1470]: true
May 13 03:38:31.007668 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
May 13 03:38:31.007713 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
May 13 03:38:31.009186 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
May 13 03:38:31.009204 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
May 13 03:38:31.020024 systemd[1]: motdgen.service: Deactivated successfully.
May 13 03:38:31.021281 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
May 13 03:38:31.042310 jq[1487]: true
May 13 03:38:31.038444 (ntainerd)[1485]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
May 13 03:38:31.045773 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
May 13 03:38:31.056041 update_engine[1469]: I20250513 03:38:31.055939 1469 main.cc:92] Flatcar Update Engine starting
May 13 03:38:31.059285 tar[1473]: linux-amd64/LICENSE
May 13 03:38:31.059285 tar[1473]: linux-amd64/helm
May 13 03:38:31.067850 systemd[1]: Started update-engine.service - Update Engine.
May 13 03:38:31.073618 update_engine[1469]: I20250513 03:38:31.073289 1469 update_check_scheduler.cc:74] Next update check in 4m17s
May 13 03:38:31.079578 extend-filesystems[1462]: Resized partition /dev/vda9
May 13 03:38:31.087733 extend-filesystems[1500]: resize2fs 1.47.2 (1-Jan-2025)
May 13 03:38:31.145097 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 2014203 blocks
May 13 03:38:31.145141 kernel: EXT4-fs (vda9): resized filesystem to 2014203
May 13 03:38:31.145160 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1420)
May 13 03:38:31.093203 systemd[1]: Started locksmithd.service - Cluster reboot manager.
May 13 03:38:31.150262 extend-filesystems[1500]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
May 13 03:38:31.150262 extend-filesystems[1500]: old_desc_blocks = 1, new_desc_blocks = 1
May 13 03:38:31.150262 extend-filesystems[1500]: The filesystem on /dev/vda9 is now 2014203 (4k) blocks long.
May 13 03:38:31.150055 systemd[1]: extend-filesystems.service: Deactivated successfully.
May 13 03:38:31.171480 extend-filesystems[1462]: Resized filesystem in /dev/vda9
May 13 03:38:31.151305 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
May 13 03:38:31.217333 bash[1514]: Updated "/home/core/.ssh/authorized_keys"
May 13 03:38:31.203013 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
May 13 03:38:31.210548 systemd[1]: Starting sshkeys.service...
May 13 03:38:31.216714 systemd-logind[1468]: New seat seat0.
May 13 03:38:31.218156 systemd-logind[1468]: Watching system buttons on /dev/input/event1 (Power Button)
May 13 03:38:31.218176 systemd-logind[1468]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
May 13 03:38:31.220087 systemd[1]: Started systemd-logind.service - User Login Management.
May 13 03:38:31.303765 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
May 13 03:38:31.314204 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
May 13 03:38:31.381361 locksmithd[1499]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
May 13 03:38:31.571924 containerd[1485]: time="2025-05-13T03:38:31Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
May 13 03:38:31.573828 containerd[1485]: time="2025-05-13T03:38:31.572624692Z" level=info msg="starting containerd" revision=88aa2f531d6c2922003cc7929e51daf1c14caa0a version=v2.0.1
May 13 03:38:31.586345 containerd[1485]: time="2025-05-13T03:38:31.586285112Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="8.907µs"
May 13 03:38:31.586345 containerd[1485]: time="2025-05-13T03:38:31.586340145Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
May 13 03:38:31.586425 containerd[1485]: time="2025-05-13T03:38:31.586368689Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
May 13 03:38:31.586636 containerd[1485]: time="2025-05-13T03:38:31.586602748Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
May 13 03:38:31.586683 containerd[1485]: time="2025-05-13T03:38:31.586640439Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
May 13 03:38:31.586708 containerd[1485]: time="2025-05-13T03:38:31.586682998Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
May 13 03:38:31.586783 containerd[1485]: time="2025-05-13T03:38:31.586755835Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
May 13 03:38:31.586783 containerd[1485]: time="2025-05-13T03:38:31.586777436Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
May 13 03:38:31.587070 containerd[1485]: time="2025-05-13T03:38:31.587038756Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
May 13 03:38:31.587070 containerd[1485]: time="2025-05-13T03:38:31.587065967Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
May 13 03:38:31.587128 containerd[1485]: time="2025-05-13T03:38:31.587081877Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
May 13 03:38:31.587128 containerd[1485]: time="2025-05-13T03:38:31.587093819Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
May 13 03:38:31.587208 containerd[1485]: time="2025-05-13T03:38:31.587183307Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
May 13 03:38:31.587922 containerd[1485]: time="2025-05-13T03:38:31.587570503Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
May 13 03:38:31.587922 containerd[1485]: time="2025-05-13T03:38:31.587610177Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
May 13 03:38:31.587922 containerd[1485]: time="2025-05-13T03:38:31.587622811Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
May 13 03:38:31.587922 containerd[1485]: time="2025-05-13T03:38:31.587733378Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
May 13 03:38:31.588129 containerd[1485]: time="2025-05-13T03:38:31.588052477Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
May 13 03:38:31.588214 containerd[1485]: time="2025-05-13T03:38:31.588187019Z" level=info msg="metadata content store policy set" policy=shared
May 13 03:38:31.599364 containerd[1485]: time="2025-05-13T03:38:31.599320690Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
May 13 03:38:31.599440 containerd[1485]: time="2025-05-13T03:38:31.599386594Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
May 13 03:38:31.599440 containerd[1485]: time="2025-05-13T03:38:31.599405019Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
May 13 03:38:31.599440 containerd[1485]: time="2025-05-13T03:38:31.599419736Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
May 13 03:38:31.599440 containerd[1485]: time="2025-05-13T03:38:31.599435356Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
May 13 03:38:31.599594 containerd[1485]: time="2025-05-13T03:38:31.599450223Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
May 13 03:38:31.599594 containerd[1485]: time="2025-05-13T03:38:31.599468237Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
May 13 03:38:31.599594 containerd[1485]: time="2025-05-13T03:38:31.599482534Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
May 13 03:38:31.599594 containerd[1485]: time="2025-05-13T03:38:31.599494446Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
May 13 03:38:31.599594 containerd[1485]: time="2025-05-13T03:38:31.599508533Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
May 13 03:38:31.599594 containerd[1485]: time="2025-05-13T03:38:31.599519413Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
May 13 03:38:31.599594 containerd[1485]: time="2025-05-13T03:38:31.599532378Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
May 13 03:38:31.599739 containerd[1485]: time="2025-05-13T03:38:31.599652262Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
May 13 03:38:31.599739 containerd[1485]: time="2025-05-13T03:38:31.599677550Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
May 13 03:38:31.599739 containerd[1485]: time="2025-05-13T03:38:31.599698639Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
May 13 03:38:31.599739 containerd[1485]: time="2025-05-13T03:38:31.599713076Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
May 13 03:38:31.599739 containerd[1485]: time="2025-05-13T03:38:31.599725610Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
May 13 03:38:31.599739 containerd[1485]: time="2025-05-13T03:38:31.599737272Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
May 13 03:38:31.599864 containerd[1485]: time="2025-05-13T03:38:31.599751428Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
May 13 03:38:31.599864 containerd[1485]: time="2025-05-13T03:38:31.599764613Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
May 13 03:38:31.599864 containerd[1485]: time="2025-05-13T03:38:31.599777928Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
May 13 03:38:31.599864 containerd[1485]: time="2025-05-13T03:38:31.599789890Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
May 13 03:38:31.599864 containerd[1485]: time="2025-05-13T03:38:31.599804788Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
May 13 03:38:31.600112 containerd[1485]: time="2025-05-13T03:38:31.599869239Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
May 13 03:38:31.600112 containerd[1485]: time="2025-05-13T03:38:31.599885199Z" level=info msg="Start snapshots syncer"
May 13 03:38:31.600112 containerd[1485]: time="2025-05-13T03:38:31.599915115Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
May 13 03:38:31.600206 containerd[1485]: time="2025-05-13T03:38:31.600163431Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
May 13 03:38:31.600420 containerd[1485]: time="2025-05-13T03:38:31.600218745Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
May 13 03:38:31.600420 containerd[1485]: time="2025-05-13T03:38:31.600319163Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
May 13 03:38:31.600420 containerd[1485]: time="2025-05-13T03:38:31.600412658Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
May 13 03:38:31.600513 containerd[1485]: time="2025-05-13T03:38:31.600438808Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
May 13 03:38:31.600513 containerd[1485]: time="2025-05-13T03:38:31.600451682Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
May 13 03:38:31.600513 containerd[1485]: time="2025-05-13T03:38:31.600462472Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
May 13 03:38:31.600513 containerd[1485]: time="2025-05-13T03:38:31.600476238Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
May 13 03:38:31.600513 containerd[1485]: time="2025-05-13T03:38:31.600487900Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
May 13 03:38:31.600513 containerd[1485]: time="2025-05-13T03:38:31.600500223Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
May 13 03:38:31.600701 containerd[1485]: time="2025-05-13T03:38:31.600522945Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
May 13 03:38:31.600701 containerd[1485]: time="2025-05-13T03:38:31.600537102Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
May 13 03:38:31.600701 containerd[1485]: time="2025-05-13T03:38:31.600549104Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
May 13 03:38:31.600701 containerd[1485]: time="2025-05-13T03:38:31.600587546Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
May 13 03:38:31.600701 containerd[1485]: time="2025-05-13T03:38:31.600604067Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
May 13 03:38:31.600701 containerd[1485]: time="2025-05-13T03:38:31.600614788Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
May 13 03:38:31.600701 containerd[1485]: time="2025-05-13T03:38:31.600625798Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
May 13 03:38:31.600701 containerd[1485]: time="2025-05-13T03:38:31.600634985Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
May 13 03:38:31.600701 containerd[1485]: time="2025-05-13T03:38:31.600645665Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
May 13 03:38:31.600701 containerd[1485]: time="2025-05-13T03:38:31.600657898Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
May 13 03:38:31.600701 containerd[1485]: time="2025-05-13T03:38:31.600675371Z" level=info msg="runtime interface created"
May 13 03:38:31.600701 containerd[1485]: time="2025-05-13T03:38:31.600681433Z" level=info msg="created NRI interface"
May 13 03:38:31.600701 containerd[1485]: time="2025-05-13T03:38:31.600690630Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
May 13 03:38:31.600701 containerd[1485]: time="2025-05-13T03:38:31.600702302Z" level=info msg="Connect containerd service"
May 13 03:38:31.601000 containerd[1485]: time="2025-05-13T03:38:31.600730204Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
May 13 03:38:31.602036 containerd[1485]: time="2025-05-13T03:38:31.601596509Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 13 03:38:31.710825 sshd_keygen[1494]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
May 13 03:38:31.741399 systemd-networkd[1381]: eth0: Gained IPv6LL
May 13 03:38:31.744218 systemd-timesyncd[1366]: Network configuration changed, trying to establish connection.
May 13 03:38:31.746214 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
May 13 03:38:31.754791 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
May 13 03:38:31.764656 systemd[1]: Reached target network-online.target - Network is Online.
May 13 03:38:31.785790 systemd[1]: Starting issuegen.service - Generate /run/issue...
May 13 03:38:31.792894 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 13 03:38:31.801687 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
May 13 03:38:31.809649 systemd[1]: Started sshd@0-172.24.4.174:22-172.24.4.1:34440.service - OpenSSH per-connection server daemon (172.24.4.1:34440).
May 13 03:38:31.823574 systemd[1]: issuegen.service: Deactivated successfully.
May 13 03:38:31.823773 systemd[1]: Finished issuegen.service - Generate /run/issue.
May 13 03:38:31.838817 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
May 13 03:38:31.879735 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
May 13 03:38:31.889090 systemd[1]: Started getty@tty1.service - Getty on tty1.
May 13 03:38:31.896753 containerd[1485]: time="2025-05-13T03:38:31.895355504Z" level=info msg="Start subscribing containerd event"
May 13 03:38:31.895475 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
May 13 03:38:31.897698 systemd[1]: Reached target getty.target - Login Prompts. May 13 03:38:31.898752 containerd[1485]: time="2025-05-13T03:38:31.898276452Z" level=info msg="Start recovering state" May 13 03:38:31.898752 containerd[1485]: time="2025-05-13T03:38:31.898448986Z" level=info msg="Start event monitor" May 13 03:38:31.898752 containerd[1485]: time="2025-05-13T03:38:31.898468803Z" level=info msg="Start cni network conf syncer for default" May 13 03:38:31.898752 containerd[1485]: time="2025-05-13T03:38:31.898478401Z" level=info msg="Start streaming server" May 13 03:38:31.898752 containerd[1485]: time="2025-05-13T03:38:31.898489622Z" level=info msg="Registered namespace \"k8s.io\" with NRI" May 13 03:38:31.898752 containerd[1485]: time="2025-05-13T03:38:31.898500021Z" level=info msg="runtime interface starting up..." May 13 03:38:31.898752 containerd[1485]: time="2025-05-13T03:38:31.898613735Z" level=info msg="starting plugins..." May 13 03:38:31.898752 containerd[1485]: time="2025-05-13T03:38:31.898628492Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" May 13 03:38:31.904014 containerd[1485]: time="2025-05-13T03:38:31.903363032Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 13 03:38:31.904014 containerd[1485]: time="2025-05-13T03:38:31.903622288Z" level=info msg=serving... address=/run/containerd/containerd.sock May 13 03:38:31.903867 systemd[1]: Started containerd.service - containerd container runtime. May 13 03:38:31.904137 containerd[1485]: time="2025-05-13T03:38:31.904040643Z" level=info msg="containerd successfully booted in 0.332515s" May 13 03:38:31.910644 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 13 03:38:32.067500 tar[1473]: linux-amd64/README.md May 13 03:38:32.087667 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
May 13 03:38:32.770062 sshd[1555]: Accepted publickey for core from 172.24.4.1 port 34440 ssh2: RSA SHA256:opboDc8cTXVJHtjjKc0iUyBIh5veaBDSbNiIl+xLW2w May 13 03:38:32.773431 sshd-session[1555]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 03:38:32.792742 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 13 03:38:32.802303 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 13 03:38:32.840101 systemd-logind[1468]: New session 1 of user core. May 13 03:38:32.853096 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 13 03:38:32.862370 systemd[1]: Starting user@500.service - User Manager for UID 500... May 13 03:38:32.881538 (systemd)[1585]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 13 03:38:32.885830 systemd-logind[1468]: New session c1 of user core. May 13 03:38:33.070862 systemd[1585]: Queued start job for default target default.target. May 13 03:38:33.075291 systemd[1585]: Created slice app.slice - User Application Slice. May 13 03:38:33.075323 systemd[1585]: Reached target paths.target - Paths. May 13 03:38:33.075369 systemd[1585]: Reached target timers.target - Timers. May 13 03:38:33.076831 systemd[1585]: Starting dbus.socket - D-Bus User Message Bus Socket... May 13 03:38:33.116370 systemd[1585]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 13 03:38:33.116502 systemd[1585]: Reached target sockets.target - Sockets. May 13 03:38:33.116547 systemd[1585]: Reached target basic.target - Basic System. May 13 03:38:33.116587 systemd[1585]: Reached target default.target - Main User Target. May 13 03:38:33.116622 systemd[1585]: Startup finished in 217ms. May 13 03:38:33.116744 systemd[1]: Started user@500.service - User Manager for UID 500. May 13 03:38:33.126567 systemd[1]: Started session-1.scope - Session 1 of User core. 
May 13 03:38:33.588592 systemd[1]: Started sshd@1-172.24.4.174:22-172.24.4.1:46628.service - OpenSSH per-connection server daemon (172.24.4.1:46628). May 13 03:38:33.672004 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 03:38:33.682563 (kubelet)[1601]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 13 03:38:34.854072 kubelet[1601]: E0513 03:38:34.853998 1601 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 13 03:38:34.858745 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 13 03:38:34.858919 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 13 03:38:34.859322 systemd[1]: kubelet.service: Consumed 1.860s CPU time, 256.6M memory peak. May 13 03:38:35.702116 sshd[1596]: Accepted publickey for core from 172.24.4.1 port 46628 ssh2: RSA SHA256:opboDc8cTXVJHtjjKc0iUyBIh5veaBDSbNiIl+xLW2w May 13 03:38:35.705098 sshd-session[1596]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 03:38:35.715758 systemd-logind[1468]: New session 2 of user core. May 13 03:38:35.725720 systemd[1]: Started session-2.scope - Session 2 of User core. May 13 03:38:36.447757 sshd[1612]: Connection closed by 172.24.4.1 port 46628 May 13 03:38:36.448921 sshd-session[1596]: pam_unix(sshd:session): session closed for user core May 13 03:38:36.468609 systemd[1]: sshd@1-172.24.4.174:22-172.24.4.1:46628.service: Deactivated successfully. May 13 03:38:36.472140 systemd[1]: session-2.scope: Deactivated successfully. May 13 03:38:36.474460 systemd-logind[1468]: Session 2 logged out. Waiting for processes to exit. 
May 13 03:38:36.480382 systemd[1]: Started sshd@2-172.24.4.174:22-172.24.4.1:46644.service - OpenSSH per-connection server daemon (172.24.4.1:46644). May 13 03:38:36.488563 systemd-logind[1468]: Removed session 2. May 13 03:38:36.974069 login[1573]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) May 13 03:38:36.988669 systemd-logind[1468]: New session 3 of user core. May 13 03:38:36.996944 login[1575]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) May 13 03:38:36.998327 systemd[1]: Started session-3.scope - Session 3 of User core. May 13 03:38:37.010652 systemd-logind[1468]: New session 4 of user core. May 13 03:38:37.019675 systemd[1]: Started session-4.scope - Session 4 of User core. May 13 03:38:37.831140 sshd[1617]: Accepted publickey for core from 172.24.4.1 port 46644 ssh2: RSA SHA256:opboDc8cTXVJHtjjKc0iUyBIh5veaBDSbNiIl+xLW2w May 13 03:38:37.834488 sshd-session[1617]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 03:38:37.846839 systemd-logind[1468]: New session 5 of user core. May 13 03:38:37.856733 systemd[1]: Started session-5.scope - Session 5 of User core. 
May 13 03:38:37.969581 coreos-metadata[1457]: May 13 03:38:37.969 WARN failed to locate config-drive, using the metadata service API instead May 13 03:38:38.020462 coreos-metadata[1457]: May 13 03:38:38.020 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1 May 13 03:38:38.384296 coreos-metadata[1457]: May 13 03:38:38.384 INFO Fetch successful May 13 03:38:38.384466 coreos-metadata[1457]: May 13 03:38:38.384 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 May 13 03:38:38.393178 coreos-metadata[1457]: May 13 03:38:38.393 INFO Fetch successful May 13 03:38:38.393178 coreos-metadata[1457]: May 13 03:38:38.393 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 May 13 03:38:38.407513 coreos-metadata[1457]: May 13 03:38:38.407 INFO Fetch successful May 13 03:38:38.407513 coreos-metadata[1457]: May 13 03:38:38.407 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 May 13 03:38:38.421430 coreos-metadata[1457]: May 13 03:38:38.421 INFO Fetch successful May 13 03:38:38.421506 coreos-metadata[1457]: May 13 03:38:38.421 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 May 13 03:38:38.429606 coreos-metadata[1457]: May 13 03:38:38.429 INFO Fetch successful May 13 03:38:38.429606 coreos-metadata[1457]: May 13 03:38:38.429 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 May 13 03:38:38.436487 coreos-metadata[1520]: May 13 03:38:38.436 WARN failed to locate config-drive, using the metadata service API instead May 13 03:38:38.444571 coreos-metadata[1457]: May 13 03:38:38.444 INFO Fetch successful May 13 03:38:38.482296 coreos-metadata[1520]: May 13 03:38:38.481 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 May 13 03:38:38.498660 coreos-metadata[1520]: May 13 03:38:38.498 INFO Fetch successful May 13 03:38:38.498660 coreos-metadata[1520]: May 13 03:38:38.498 INFO Fetching 
http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 May 13 03:38:38.501835 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. May 13 03:38:38.504842 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 13 03:38:38.514635 coreos-metadata[1520]: May 13 03:38:38.514 INFO Fetch successful May 13 03:38:38.520566 unknown[1520]: wrote ssh authorized keys file for user: core May 13 03:38:38.561300 update-ssh-keys[1658]: Updated "/home/core/.ssh/authorized_keys" May 13 03:38:38.562064 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). May 13 03:38:38.564678 systemd[1]: Finished sshkeys.service. May 13 03:38:38.571466 systemd[1]: Reached target multi-user.target - Multi-User System. May 13 03:38:38.571831 systemd[1]: Startup finished in 1.242s (kernel) + 16.746s (initrd) + 11.087s (userspace) = 29.075s. May 13 03:38:38.579620 sshd[1646]: Connection closed by 172.24.4.1 port 46644 May 13 03:38:38.580717 sshd-session[1617]: pam_unix(sshd:session): session closed for user core May 13 03:38:38.586495 systemd[1]: sshd@2-172.24.4.174:22-172.24.4.1:46644.service: Deactivated successfully. May 13 03:38:38.590276 systemd[1]: session-5.scope: Deactivated successfully. May 13 03:38:38.593825 systemd-logind[1468]: Session 5 logged out. Waiting for processes to exit. May 13 03:38:38.596528 systemd-logind[1468]: Removed session 5. May 13 03:38:45.054518 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 13 03:38:45.058833 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 03:38:45.416328 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 13 03:38:45.432545 (kubelet)[1671]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 13 03:38:45.568298 kubelet[1671]: E0513 03:38:45.568012 1671 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 13 03:38:45.575698 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 13 03:38:45.576141 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 13 03:38:45.577384 systemd[1]: kubelet.service: Consumed 328ms CPU time, 102M memory peak. May 13 03:38:48.602452 systemd[1]: Started sshd@3-172.24.4.174:22-172.24.4.1:50940.service - OpenSSH per-connection server daemon (172.24.4.1:50940). May 13 03:38:50.058548 sshd[1680]: Accepted publickey for core from 172.24.4.1 port 50940 ssh2: RSA SHA256:opboDc8cTXVJHtjjKc0iUyBIh5veaBDSbNiIl+xLW2w May 13 03:38:50.061801 sshd-session[1680]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 03:38:50.074394 systemd-logind[1468]: New session 6 of user core. May 13 03:38:50.084701 systemd[1]: Started session-6.scope - Session 6 of User core. May 13 03:38:50.940674 sshd[1682]: Connection closed by 172.24.4.1 port 50940 May 13 03:38:50.940446 sshd-session[1680]: pam_unix(sshd:session): session closed for user core May 13 03:38:50.960435 systemd[1]: sshd@3-172.24.4.174:22-172.24.4.1:50940.service: Deactivated successfully. May 13 03:38:50.964664 systemd[1]: session-6.scope: Deactivated successfully. May 13 03:38:50.966652 systemd-logind[1468]: Session 6 logged out. Waiting for processes to exit. 
May 13 03:38:50.971386 systemd[1]: Started sshd@4-172.24.4.174:22-172.24.4.1:50946.service - OpenSSH per-connection server daemon (172.24.4.1:50946). May 13 03:38:50.974314 systemd-logind[1468]: Removed session 6. May 13 03:38:52.983944 sshd[1687]: Accepted publickey for core from 172.24.4.1 port 50946 ssh2: RSA SHA256:opboDc8cTXVJHtjjKc0iUyBIh5veaBDSbNiIl+xLW2w May 13 03:38:52.986685 sshd-session[1687]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 03:38:52.999719 systemd-logind[1468]: New session 7 of user core. May 13 03:38:53.007638 systemd[1]: Started session-7.scope - Session 7 of User core. May 13 03:38:53.765299 sshd[1690]: Connection closed by 172.24.4.1 port 50946 May 13 03:38:53.764485 sshd-session[1687]: pam_unix(sshd:session): session closed for user core May 13 03:38:53.784569 systemd[1]: sshd@4-172.24.4.174:22-172.24.4.1:50946.service: Deactivated successfully. May 13 03:38:53.787908 systemd[1]: session-7.scope: Deactivated successfully. May 13 03:38:53.790727 systemd-logind[1468]: Session 7 logged out. Waiting for processes to exit. May 13 03:38:53.794768 systemd[1]: Started sshd@5-172.24.4.174:22-172.24.4.1:44256.service - OpenSSH per-connection server daemon (172.24.4.1:44256). May 13 03:38:53.798074 systemd-logind[1468]: Removed session 7. May 13 03:38:55.183290 sshd[1695]: Accepted publickey for core from 172.24.4.1 port 44256 ssh2: RSA SHA256:opboDc8cTXVJHtjjKc0iUyBIh5veaBDSbNiIl+xLW2w May 13 03:38:55.186652 sshd-session[1695]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 03:38:55.200363 systemd-logind[1468]: New session 8 of user core. May 13 03:38:55.212606 systemd[1]: Started session-8.scope - Session 8 of User core. May 13 03:38:55.655188 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 13 03:38:55.658492 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
May 13 03:38:55.968591 sshd[1698]: Connection closed by 172.24.4.1 port 44256 May 13 03:38:55.970042 sshd-session[1695]: pam_unix(sshd:session): session closed for user core May 13 03:38:55.988520 systemd[1]: sshd@5-172.24.4.174:22-172.24.4.1:44256.service: Deactivated successfully. May 13 03:38:55.993576 systemd[1]: session-8.scope: Deactivated successfully. May 13 03:38:55.999470 systemd-logind[1468]: Session 8 logged out. Waiting for processes to exit. May 13 03:38:56.003426 systemd[1]: Started sshd@6-172.24.4.174:22-172.24.4.1:44272.service - OpenSSH per-connection server daemon (172.24.4.1:44272). May 13 03:38:56.007441 systemd-logind[1468]: Removed session 8. May 13 03:38:56.036467 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 03:38:56.048802 (kubelet)[1713]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 13 03:38:56.119694 kubelet[1713]: E0513 03:38:56.119633 1713 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 13 03:38:56.122841 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 13 03:38:56.123104 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 13 03:38:56.124307 systemd[1]: kubelet.service: Consumed 290ms CPU time, 103.8M memory peak. May 13 03:38:57.247269 sshd[1708]: Accepted publickey for core from 172.24.4.1 port 44272 ssh2: RSA SHA256:opboDc8cTXVJHtjjKc0iUyBIh5veaBDSbNiIl+xLW2w May 13 03:38:57.250162 sshd-session[1708]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 03:38:57.260119 systemd-logind[1468]: New session 9 of user core. 
May 13 03:38:57.271533 systemd[1]: Started session-9.scope - Session 9 of User core. May 13 03:38:57.940666 sudo[1722]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 13 03:38:57.942523 sudo[1722]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 13 03:38:57.973902 sudo[1722]: pam_unix(sudo:session): session closed for user root May 13 03:38:58.171289 sshd[1721]: Connection closed by 172.24.4.1 port 44272 May 13 03:38:58.172536 sshd-session[1708]: pam_unix(sshd:session): session closed for user core May 13 03:38:58.194328 systemd[1]: sshd@6-172.24.4.174:22-172.24.4.1:44272.service: Deactivated successfully. May 13 03:38:58.200843 systemd[1]: session-9.scope: Deactivated successfully. May 13 03:38:58.205595 systemd-logind[1468]: Session 9 logged out. Waiting for processes to exit. May 13 03:38:58.209936 systemd[1]: Started sshd@7-172.24.4.174:22-172.24.4.1:44284.service - OpenSSH per-connection server daemon (172.24.4.1:44284). May 13 03:38:58.214933 systemd-logind[1468]: Removed session 9. May 13 03:38:59.385264 sshd[1727]: Accepted publickey for core from 172.24.4.1 port 44284 ssh2: RSA SHA256:opboDc8cTXVJHtjjKc0iUyBIh5veaBDSbNiIl+xLW2w May 13 03:38:59.387957 sshd-session[1727]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 03:38:59.402266 systemd-logind[1468]: New session 10 of user core. May 13 03:38:59.408559 systemd[1]: Started session-10.scope - Session 10 of User core. 
May 13 03:38:59.865735 sudo[1732]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 13 03:38:59.866423 sudo[1732]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 13 03:38:59.873433 sudo[1732]: pam_unix(sudo:session): session closed for user root May 13 03:38:59.884753 sudo[1731]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules May 13 03:38:59.886020 sudo[1731]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 13 03:38:59.907934 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 13 03:38:59.974190 augenrules[1754]: No rules May 13 03:38:59.976399 systemd[1]: audit-rules.service: Deactivated successfully. May 13 03:38:59.976959 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 13 03:38:59.979438 sudo[1731]: pam_unix(sudo:session): session closed for user root May 13 03:39:00.174398 sshd[1730]: Connection closed by 172.24.4.1 port 44284 May 13 03:39:00.172864 sshd-session[1727]: pam_unix(sshd:session): session closed for user core May 13 03:39:00.192333 systemd[1]: sshd@7-172.24.4.174:22-172.24.4.1:44284.service: Deactivated successfully. May 13 03:39:00.196199 systemd[1]: session-10.scope: Deactivated successfully. May 13 03:39:00.199165 systemd-logind[1468]: Session 10 logged out. Waiting for processes to exit. May 13 03:39:00.202641 systemd[1]: Started sshd@8-172.24.4.174:22-172.24.4.1:44290.service - OpenSSH per-connection server daemon (172.24.4.1:44290). May 13 03:39:00.205394 systemd-logind[1468]: Removed session 10. 
May 13 03:39:01.319822 sshd[1762]: Accepted publickey for core from 172.24.4.1 port 44290 ssh2: RSA SHA256:opboDc8cTXVJHtjjKc0iUyBIh5veaBDSbNiIl+xLW2w May 13 03:39:01.322353 sshd-session[1762]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 03:39:01.336205 systemd-logind[1468]: New session 11 of user core. May 13 03:39:01.342555 systemd[1]: Started session-11.scope - Session 11 of User core. May 13 03:39:01.798041 sudo[1766]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 13 03:39:01.798723 sudo[1766]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 13 03:39:02.122154 systemd-timesyncd[1366]: Contacted time server 45.79.82.45:123 (2.flatcar.pool.ntp.org). May 13 03:39:02.122215 systemd-timesyncd[1366]: Initial clock synchronization to Tue 2025-05-13 03:39:02.418105 UTC. May 13 03:39:02.583421 systemd[1]: Starting docker.service - Docker Application Container Engine... May 13 03:39:02.602725 (dockerd)[1783]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 13 03:39:03.128788 dockerd[1783]: time="2025-05-13T03:39:03.128072950Z" level=info msg="Starting up" May 13 03:39:03.136624 dockerd[1783]: time="2025-05-13T03:39:03.136541192Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" May 13 03:39:03.183862 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3795628197-merged.mount: Deactivated successfully. May 13 03:39:03.256580 dockerd[1783]: time="2025-05-13T03:39:03.256157501Z" level=info msg="Loading containers: start." May 13 03:39:03.511403 kernel: Initializing XFRM netlink socket May 13 03:39:03.810225 systemd-networkd[1381]: docker0: Link UP May 13 03:39:03.891038 dockerd[1783]: time="2025-05-13T03:39:03.890824333Z" level=info msg="Loading containers: done." 
May 13 03:39:03.923448 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3821144774-merged.mount: Deactivated successfully. May 13 03:39:03.926747 dockerd[1783]: time="2025-05-13T03:39:03.925371631Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 13 03:39:03.926747 dockerd[1783]: time="2025-05-13T03:39:03.925520588Z" level=info msg="Docker daemon" commit=c710b88579fcb5e0d53f96dcae976d79323b9166 containerd-snapshotter=false storage-driver=overlay2 version=27.4.1 May 13 03:39:03.926747 dockerd[1783]: time="2025-05-13T03:39:03.925736163Z" level=info msg="Daemon has completed initialization" May 13 03:39:03.987878 dockerd[1783]: time="2025-05-13T03:39:03.987790246Z" level=info msg="API listen on /run/docker.sock" May 13 03:39:03.987865 systemd[1]: Started docker.service - Docker Application Container Engine. May 13 03:39:05.701338 containerd[1485]: time="2025-05-13T03:39:05.701204073Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\"" May 13 03:39:06.304183 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. May 13 03:39:06.307830 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 03:39:06.551933 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 13 03:39:06.562692 (kubelet)[1990]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 13 03:39:06.609421 kubelet[1990]: E0513 03:39:06.609352 1990 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 13 03:39:06.614145 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 13 03:39:06.614580 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 13 03:39:06.615550 systemd[1]: kubelet.service: Consumed 226ms CPU time, 105.1M memory peak. May 13 03:39:06.778533 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3012812769.mount: Deactivated successfully. May 13 03:39:08.548986 containerd[1485]: time="2025-05-13T03:39:08.548655729Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 03:39:08.550083 containerd[1485]: time="2025-05-13T03:39:08.549876830Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.4: active requests=0, bytes read=28682887" May 13 03:39:08.551276 containerd[1485]: time="2025-05-13T03:39:08.551222980Z" level=info msg="ImageCreate event name:\"sha256:1c20c8797e48698afa3380793df2f1fb260e3209df72d8e864e1bc73af8336e5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 03:39:08.554473 containerd[1485]: time="2025-05-13T03:39:08.554418902Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 03:39:08.556147 containerd[1485]: time="2025-05-13T03:39:08.555378386Z" 
level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.4\" with image id \"sha256:1c20c8797e48698afa3380793df2f1fb260e3209df72d8e864e1bc73af8336e5\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f\", size \"28679679\" in 2.854060848s" May 13 03:39:08.556147 containerd[1485]: time="2025-05-13T03:39:08.555413842Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\" returns image reference \"sha256:1c20c8797e48698afa3380793df2f1fb260e3209df72d8e864e1bc73af8336e5\"" May 13 03:39:08.556147 containerd[1485]: time="2025-05-13T03:39:08.556014989Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\"" May 13 03:39:10.558637 containerd[1485]: time="2025-05-13T03:39:10.558425710Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 03:39:10.559945 containerd[1485]: time="2025-05-13T03:39:10.559657699Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.4: active requests=0, bytes read=24779597" May 13 03:39:10.561170 containerd[1485]: time="2025-05-13T03:39:10.561113988Z" level=info msg="ImageCreate event name:\"sha256:4db5364cd5509e0fc8e9f821fbc4b31ed79d4c9ae21809d22030ad67d530a61a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 03:39:10.564523 containerd[1485]: time="2025-05-13T03:39:10.564090326Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:25e29187ea66f0ff9b9a00114849c3a30b649005c900a8b2a69e3f3fa56448fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 03:39:10.565090 containerd[1485]: time="2025-05-13T03:39:10.565056629Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.4\" with image id 
\"sha256:4db5364cd5509e0fc8e9f821fbc4b31ed79d4c9ae21809d22030ad67d530a61a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:25e29187ea66f0ff9b9a00114849c3a30b649005c900a8b2a69e3f3fa56448fb\", size \"26267962\" in 2.009015652s" May 13 03:39:10.565141 containerd[1485]: time="2025-05-13T03:39:10.565089755Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\" returns image reference \"sha256:4db5364cd5509e0fc8e9f821fbc4b31ed79d4c9ae21809d22030ad67d530a61a\"" May 13 03:39:10.565833 containerd[1485]: time="2025-05-13T03:39:10.565669443Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\"" May 13 03:39:12.361908 containerd[1485]: time="2025-05-13T03:39:12.361837063Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 03:39:12.363225 containerd[1485]: time="2025-05-13T03:39:12.363167211Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.4: active requests=0, bytes read=19169946" May 13 03:39:12.364771 containerd[1485]: time="2025-05-13T03:39:12.364727500Z" level=info msg="ImageCreate event name:\"sha256:70a252485ed1f2e8332b6f0a5f8f57443bfbc3c480228f8dcd82ad5ab5cc4000\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 03:39:12.369699 containerd[1485]: time="2025-05-13T03:39:12.368583699Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:09c55f8dac59a4b8e5e354140f5a4bdd6fa9bd95c42d6bcba6782ed37c31b5a2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 03:39:12.369699 containerd[1485]: time="2025-05-13T03:39:12.369597519Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.4\" with image id \"sha256:70a252485ed1f2e8332b6f0a5f8f57443bfbc3c480228f8dcd82ad5ab5cc4000\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.4\", repo digest 
\"registry.k8s.io/kube-scheduler@sha256:09c55f8dac59a4b8e5e354140f5a4bdd6fa9bd95c42d6bcba6782ed37c31b5a2\", size \"20658329\" in 1.803899077s" May 13 03:39:12.369699 containerd[1485]: time="2025-05-13T03:39:12.369623342Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\" returns image reference \"sha256:70a252485ed1f2e8332b6f0a5f8f57443bfbc3c480228f8dcd82ad5ab5cc4000\"" May 13 03:39:12.371332 containerd[1485]: time="2025-05-13T03:39:12.371292996Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\"" May 13 03:39:13.775822 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1553750526.mount: Deactivated successfully. May 13 03:39:14.510828 containerd[1485]: time="2025-05-13T03:39:14.510698204Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 03:39:14.526061 containerd[1485]: time="2025-05-13T03:39:14.525858691Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.4: active requests=0, bytes read=30917864" May 13 03:39:14.543417 containerd[1485]: time="2025-05-13T03:39:14.543269659Z" level=info msg="ImageCreate event name:\"sha256:608f0c8bf7f9651ca79f170235ea5eefb978a0c1da132e7477a88ad37d171ad3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 03:39:14.567838 containerd[1485]: time="2025-05-13T03:39:14.567769575Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 03:39:14.569441 containerd[1485]: time="2025-05-13T03:39:14.569298466Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.4\" with image id \"sha256:608f0c8bf7f9651ca79f170235ea5eefb978a0c1da132e7477a88ad37d171ad3\", repo tag \"registry.k8s.io/kube-proxy:v1.32.4\", repo digest 
\"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\", size \"30916875\" in 2.197914264s" May 13 03:39:14.569441 containerd[1485]: time="2025-05-13T03:39:14.569328920Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\" returns image reference \"sha256:608f0c8bf7f9651ca79f170235ea5eefb978a0c1da132e7477a88ad37d171ad3\"" May 13 03:39:14.570172 containerd[1485]: time="2025-05-13T03:39:14.569999513Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" May 13 03:39:15.243641 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3868774634.mount: Deactivated successfully. May 13 03:39:16.509884 update_engine[1469]: I20250513 03:39:16.509727 1469 update_attempter.cc:509] Updating boot flags... May 13 03:39:16.540405 containerd[1485]: time="2025-05-13T03:39:16.540292777Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 03:39:16.544434 containerd[1485]: time="2025-05-13T03:39:16.544329141Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565249" May 13 03:39:16.551516 containerd[1485]: time="2025-05-13T03:39:16.551332998Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 03:39:16.558352 containerd[1485]: time="2025-05-13T03:39:16.557030847Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 03:39:16.562358 containerd[1485]: time="2025-05-13T03:39:16.562290369Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag 
\"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.992203864s" May 13 03:39:16.565020 containerd[1485]: time="2025-05-13T03:39:16.564320027Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" May 13 03:39:16.566659 containerd[1485]: time="2025-05-13T03:39:16.566584937Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 13 03:39:16.606305 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2126) May 13 03:39:16.618112 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. May 13 03:39:16.622416 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 03:39:16.706342 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2128) May 13 03:39:17.024880 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 03:39:17.038835 (kubelet)[2141]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 13 03:39:17.132805 kubelet[2141]: E0513 03:39:17.132706 2141 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 13 03:39:17.137303 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 13 03:39:17.137834 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 13 03:39:17.139013 systemd[1]: kubelet.service: Consumed 255ms CPU time, 105.5M memory peak. 
May 13 03:39:17.567231 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3232435199.mount: Deactivated successfully. May 13 03:39:17.579011 containerd[1485]: time="2025-05-13T03:39:17.578772648Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 13 03:39:17.581320 containerd[1485]: time="2025-05-13T03:39:17.580952285Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146" May 13 03:39:17.583562 containerd[1485]: time="2025-05-13T03:39:17.583452540Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 13 03:39:17.590295 containerd[1485]: time="2025-05-13T03:39:17.589397279Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 13 03:39:17.592534 containerd[1485]: time="2025-05-13T03:39:17.592455218Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 1.02574967s" May 13 03:39:17.592785 containerd[1485]: time="2025-05-13T03:39:17.592715371Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" May 13 03:39:17.593889 containerd[1485]: time="2025-05-13T03:39:17.593841875Z" level=info msg="PullImage 
\"registry.k8s.io/etcd:3.5.16-0\"" May 13 03:39:18.230714 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2042899188.mount: Deactivated successfully. May 13 03:39:21.114698 containerd[1485]: time="2025-05-13T03:39:21.114604192Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 03:39:21.116167 containerd[1485]: time="2025-05-13T03:39:21.116111134Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551368" May 13 03:39:21.117752 containerd[1485]: time="2025-05-13T03:39:21.117706427Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 03:39:21.121272 containerd[1485]: time="2025-05-13T03:39:21.121215741Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 03:39:21.122921 containerd[1485]: time="2025-05-13T03:39:21.122808500Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 3.528677495s" May 13 03:39:21.122921 containerd[1485]: time="2025-05-13T03:39:21.122841502Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" May 13 03:39:25.142836 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 13 03:39:25.143257 systemd[1]: kubelet.service: Consumed 255ms CPU time, 105.5M memory peak. 
May 13 03:39:25.145404 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 03:39:25.195998 systemd[1]: Reload requested from client PID 2233 ('systemctl') (unit session-11.scope)... May 13 03:39:25.196015 systemd[1]: Reloading... May 13 03:39:25.315281 zram_generator::config[2279]: No configuration found. May 13 03:39:25.479583 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 03:39:25.600680 systemd[1]: Reloading finished in 404 ms. May 13 03:39:25.649721 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM May 13 03:39:25.649807 systemd[1]: kubelet.service: Failed with result 'signal'. May 13 03:39:25.650078 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 13 03:39:25.650128 systemd[1]: kubelet.service: Consumed 111ms CPU time, 91.8M memory peak. May 13 03:39:25.651622 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 03:39:25.788055 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 03:39:25.797612 (kubelet)[2346]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 13 03:39:25.844454 kubelet[2346]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 13 03:39:25.844454 kubelet[2346]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
May 13 03:39:25.844454 kubelet[2346]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 13 03:39:25.844834 kubelet[2346]: I0513 03:39:25.844765 2346 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 13 03:39:26.368432 kubelet[2346]: I0513 03:39:26.368403 2346 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" May 13 03:39:26.369136 kubelet[2346]: I0513 03:39:26.368585 2346 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 13 03:39:26.369136 kubelet[2346]: I0513 03:39:26.368871 2346 server.go:954] "Client rotation is on, will bootstrap in background" May 13 03:39:27.398592 kubelet[2346]: E0513 03:39:27.398508 2346 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.24.4.174:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.24.4.174:6443: connect: connection refused" logger="UnhandledError" May 13 03:39:27.400866 kubelet[2346]: I0513 03:39:27.400654 2346 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 13 03:39:27.430784 kubelet[2346]: I0513 03:39:27.429329 2346 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 13 03:39:27.436324 kubelet[2346]: I0513 03:39:27.436214 2346 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 13 03:39:27.436757 kubelet[2346]: I0513 03:39:27.436682 2346 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 13 03:39:27.437206 kubelet[2346]: I0513 03:39:27.436748 2346 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4284-0-0-n-62b177a255.novalocal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 13 03:39:27.437206 kubelet[2346]: I0513 03:39:27.437186 2346 topology_manager.go:138] "Creating topology 
manager with none policy" May 13 03:39:27.437549 kubelet[2346]: I0513 03:39:27.437213 2346 container_manager_linux.go:304] "Creating device plugin manager" May 13 03:39:27.437549 kubelet[2346]: I0513 03:39:27.437479 2346 state_mem.go:36] "Initialized new in-memory state store" May 13 03:39:27.446058 kubelet[2346]: I0513 03:39:27.445967 2346 kubelet.go:446] "Attempting to sync node with API server" May 13 03:39:27.446058 kubelet[2346]: I0513 03:39:27.446018 2346 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 13 03:39:27.448441 kubelet[2346]: I0513 03:39:27.446068 2346 kubelet.go:352] "Adding apiserver pod source" May 13 03:39:27.448441 kubelet[2346]: I0513 03:39:27.446092 2346 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 13 03:39:27.464286 kubelet[2346]: W0513 03:39:27.463209 2346 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.24.4.174:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.24.4.174:6443: connect: connection refused May 13 03:39:27.464286 kubelet[2346]: E0513 03:39:27.463371 2346 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.24.4.174:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.24.4.174:6443: connect: connection refused" logger="UnhandledError" May 13 03:39:27.464286 kubelet[2346]: I0513 03:39:27.463535 2346 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.1" apiVersion="v1" May 13 03:39:27.464846 kubelet[2346]: I0513 03:39:27.464814 2346 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 13 03:39:27.466405 kubelet[2346]: W0513 03:39:27.466373 2346 probe.go:272] Flexvolume plugin directory at 
/opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 13 03:39:27.469982 kubelet[2346]: W0513 03:39:27.469338 2346 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.24.4.174:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4284-0-0-n-62b177a255.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.174:6443: connect: connection refused May 13 03:39:27.469982 kubelet[2346]: E0513 03:39:27.469464 2346 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.24.4.174:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4284-0-0-n-62b177a255.novalocal&limit=500&resourceVersion=0\": dial tcp 172.24.4.174:6443: connect: connection refused" logger="UnhandledError" May 13 03:39:27.471650 kubelet[2346]: I0513 03:39:27.471587 2346 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 13 03:39:27.471767 kubelet[2346]: I0513 03:39:27.471692 2346 server.go:1287] "Started kubelet" May 13 03:39:27.473716 kubelet[2346]: I0513 03:39:27.471929 2346 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 13 03:39:27.474397 kubelet[2346]: I0513 03:39:27.474364 2346 server.go:490] "Adding debug handlers to kubelet server" May 13 03:39:27.479887 kubelet[2346]: I0513 03:39:27.479774 2346 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 13 03:39:27.480078 kubelet[2346]: I0513 03:39:27.480049 2346 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 13 03:39:27.480352 kubelet[2346]: I0513 03:39:27.480300 2346 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 13 03:39:27.483598 kubelet[2346]: E0513 03:39:27.480644 2346 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.24.4.174:6443/api/v1/namespaces/default/events\": dial 
tcp 172.24.4.174:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4284-0-0-n-62b177a255.novalocal.183ef913778b0885 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4284-0-0-n-62b177a255.novalocal,UID:ci-4284-0-0-n-62b177a255.novalocal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4284-0-0-n-62b177a255.novalocal,},FirstTimestamp:2025-05-13 03:39:27.471626373 +0000 UTC m=+1.670641165,LastTimestamp:2025-05-13 03:39:27.471626373 +0000 UTC m=+1.670641165,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4284-0-0-n-62b177a255.novalocal,}" May 13 03:39:27.484423 kubelet[2346]: I0513 03:39:27.484371 2346 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 13 03:39:27.490423 kubelet[2346]: E0513 03:39:27.490346 2346 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4284-0-0-n-62b177a255.novalocal\" not found" May 13 03:39:27.490423 kubelet[2346]: I0513 03:39:27.490410 2346 volume_manager.go:297] "Starting Kubelet Volume Manager" May 13 03:39:27.491040 kubelet[2346]: I0513 03:39:27.490985 2346 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 13 03:39:27.491151 kubelet[2346]: I0513 03:39:27.491096 2346 reconciler.go:26] "Reconciler: start to sync state" May 13 03:39:27.494469 kubelet[2346]: W0513 03:39:27.494366 2346 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.24.4.174:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.174:6443: connect: connection refused May 13 03:39:27.494636 kubelet[2346]: E0513 03:39:27.494489 2346 reflector.go:166] 
"Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.24.4.174:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.24.4.174:6443: connect: connection refused" logger="UnhandledError" May 13 03:39:27.495009 kubelet[2346]: I0513 03:39:27.494947 2346 factory.go:221] Registration of the systemd container factory successfully May 13 03:39:27.495333 kubelet[2346]: I0513 03:39:27.495098 2346 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 13 03:39:27.497955 kubelet[2346]: E0513 03:39:27.497905 2346 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 13 03:39:27.499045 kubelet[2346]: I0513 03:39:27.498287 2346 factory.go:221] Registration of the containerd container factory successfully May 13 03:39:27.499427 kubelet[2346]: E0513 03:39:27.499368 2346 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.174:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4284-0-0-n-62b177a255.novalocal?timeout=10s\": dial tcp 172.24.4.174:6443: connect: connection refused" interval="200ms" May 13 03:39:27.527627 kubelet[2346]: I0513 03:39:27.527198 2346 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 13 03:39:27.528331 kubelet[2346]: I0513 03:39:27.528083 2346 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 13 03:39:27.528331 kubelet[2346]: I0513 03:39:27.528103 2346 status_manager.go:227] "Starting to sync pod status with apiserver" May 13 03:39:27.528331 kubelet[2346]: I0513 03:39:27.528123 2346 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." May 13 03:39:27.528331 kubelet[2346]: I0513 03:39:27.528132 2346 kubelet.go:2388] "Starting kubelet main sync loop" May 13 03:39:27.528331 kubelet[2346]: E0513 03:39:27.528173 2346 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 13 03:39:27.531356 kubelet[2346]: W0513 03:39:27.531189 2346 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.24.4.174:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.174:6443: connect: connection refused May 13 03:39:27.531356 kubelet[2346]: E0513 03:39:27.531223 2346 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.24.4.174:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.24.4.174:6443: connect: connection refused" logger="UnhandledError" May 13 03:39:27.531748 kubelet[2346]: I0513 03:39:27.531694 2346 cpu_manager.go:221] "Starting CPU manager" policy="none" May 13 03:39:27.531748 kubelet[2346]: I0513 03:39:27.531717 2346 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 13 03:39:27.531748 kubelet[2346]: I0513 03:39:27.531732 2346 state_mem.go:36] "Initialized new in-memory state store" May 13 03:39:27.537757 kubelet[2346]: I0513 03:39:27.537725 2346 policy_none.go:49] "None policy: Start" May 13 03:39:27.537757 kubelet[2346]: I0513 03:39:27.537749 2346 memory_manager.go:186] "Starting memorymanager" policy="None" May 13 
03:39:27.537757 kubelet[2346]: I0513 03:39:27.537760 2346 state_mem.go:35] "Initializing new in-memory state store" May 13 03:39:27.546789 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 13 03:39:27.559513 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 13 03:39:27.562778 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. May 13 03:39:27.570513 kubelet[2346]: I0513 03:39:27.570075 2346 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 13 03:39:27.570513 kubelet[2346]: I0513 03:39:27.570257 2346 eviction_manager.go:189] "Eviction manager: starting control loop" May 13 03:39:27.570513 kubelet[2346]: I0513 03:39:27.570272 2346 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 13 03:39:27.571205 kubelet[2346]: I0513 03:39:27.571092 2346 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 13 03:39:27.571439 kubelet[2346]: E0513 03:39:27.571377 2346 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" May 13 03:39:27.571439 kubelet[2346]: E0513 03:39:27.571418 2346 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4284-0-0-n-62b177a255.novalocal\" not found" May 13 03:39:27.654383 systemd[1]: Created slice kubepods-burstable-pod3f62da39a5a082cfe6636cfb24bd792c.slice - libcontainer container kubepods-burstable-pod3f62da39a5a082cfe6636cfb24bd792c.slice. 
May 13 03:39:27.673481 kubelet[2346]: I0513 03:39:27.673395 2346 kubelet_node_status.go:76] "Attempting to register node" node="ci-4284-0-0-n-62b177a255.novalocal" May 13 03:39:27.674087 kubelet[2346]: E0513 03:39:27.674014 2346 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://172.24.4.174:6443/api/v1/nodes\": dial tcp 172.24.4.174:6443: connect: connection refused" node="ci-4284-0-0-n-62b177a255.novalocal" May 13 03:39:27.678524 kubelet[2346]: E0513 03:39:27.678367 2346 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4284-0-0-n-62b177a255.novalocal\" not found" node="ci-4284-0-0-n-62b177a255.novalocal" May 13 03:39:27.682913 systemd[1]: Created slice kubepods-burstable-podef6ac5e663fbecbe7af44bb9f5e5693b.slice - libcontainer container kubepods-burstable-podef6ac5e663fbecbe7af44bb9f5e5693b.slice. May 13 03:39:27.690773 systemd[1]: Created slice kubepods-burstable-podb7939c7d29c82be2e2ec7d16eb151d95.slice - libcontainer container kubepods-burstable-podb7939c7d29c82be2e2ec7d16eb151d95.slice. 
May 13 03:39:27.692760 kubelet[2346]: I0513 03:39:27.692700 2346 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ef6ac5e663fbecbe7af44bb9f5e5693b-kubeconfig\") pod \"kube-scheduler-ci-4284-0-0-n-62b177a255.novalocal\" (UID: \"ef6ac5e663fbecbe7af44bb9f5e5693b\") " pod="kube-system/kube-scheduler-ci-4284-0-0-n-62b177a255.novalocal" May 13 03:39:27.692875 kubelet[2346]: I0513 03:39:27.692781 2346 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b7939c7d29c82be2e2ec7d16eb151d95-flexvolume-dir\") pod \"kube-controller-manager-ci-4284-0-0-n-62b177a255.novalocal\" (UID: \"b7939c7d29c82be2e2ec7d16eb151d95\") " pod="kube-system/kube-controller-manager-ci-4284-0-0-n-62b177a255.novalocal" May 13 03:39:27.692875 kubelet[2346]: I0513 03:39:27.692840 2346 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b7939c7d29c82be2e2ec7d16eb151d95-kubeconfig\") pod \"kube-controller-manager-ci-4284-0-0-n-62b177a255.novalocal\" (UID: \"b7939c7d29c82be2e2ec7d16eb151d95\") " pod="kube-system/kube-controller-manager-ci-4284-0-0-n-62b177a255.novalocal" May 13 03:39:27.693038 kubelet[2346]: I0513 03:39:27.692885 2346 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3f62da39a5a082cfe6636cfb24bd792c-ca-certs\") pod \"kube-apiserver-ci-4284-0-0-n-62b177a255.novalocal\" (UID: \"3f62da39a5a082cfe6636cfb24bd792c\") " pod="kube-system/kube-apiserver-ci-4284-0-0-n-62b177a255.novalocal" May 13 03:39:27.693038 kubelet[2346]: I0513 03:39:27.692934 2346 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/3f62da39a5a082cfe6636cfb24bd792c-k8s-certs\") pod \"kube-apiserver-ci-4284-0-0-n-62b177a255.novalocal\" (UID: \"3f62da39a5a082cfe6636cfb24bd792c\") " pod="kube-system/kube-apiserver-ci-4284-0-0-n-62b177a255.novalocal"
May 13 03:39:27.693038 kubelet[2346]: I0513 03:39:27.692982 2346 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3f62da39a5a082cfe6636cfb24bd792c-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4284-0-0-n-62b177a255.novalocal\" (UID: \"3f62da39a5a082cfe6636cfb24bd792c\") " pod="kube-system/kube-apiserver-ci-4284-0-0-n-62b177a255.novalocal"
May 13 03:39:27.693038 kubelet[2346]: I0513 03:39:27.693026 2346 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b7939c7d29c82be2e2ec7d16eb151d95-ca-certs\") pod \"kube-controller-manager-ci-4284-0-0-n-62b177a255.novalocal\" (UID: \"b7939c7d29c82be2e2ec7d16eb151d95\") " pod="kube-system/kube-controller-manager-ci-4284-0-0-n-62b177a255.novalocal"
May 13 03:39:27.693389 kubelet[2346]: I0513 03:39:27.693068 2346 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b7939c7d29c82be2e2ec7d16eb151d95-k8s-certs\") pod \"kube-controller-manager-ci-4284-0-0-n-62b177a255.novalocal\" (UID: \"b7939c7d29c82be2e2ec7d16eb151d95\") " pod="kube-system/kube-controller-manager-ci-4284-0-0-n-62b177a255.novalocal"
May 13 03:39:27.693389 kubelet[2346]: I0513 03:39:27.693115 2346 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b7939c7d29c82be2e2ec7d16eb151d95-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4284-0-0-n-62b177a255.novalocal\" (UID: \"b7939c7d29c82be2e2ec7d16eb151d95\") " pod="kube-system/kube-controller-manager-ci-4284-0-0-n-62b177a255.novalocal"
May 13 03:39:27.695630 kubelet[2346]: E0513 03:39:27.695585 2346 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4284-0-0-n-62b177a255.novalocal\" not found" node="ci-4284-0-0-n-62b177a255.novalocal"
May 13 03:39:27.697026 kubelet[2346]: E0513 03:39:27.696992 2346 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4284-0-0-n-62b177a255.novalocal\" not found" node="ci-4284-0-0-n-62b177a255.novalocal"
May 13 03:39:27.702404 kubelet[2346]: E0513 03:39:27.702306 2346 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.174:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4284-0-0-n-62b177a255.novalocal?timeout=10s\": dial tcp 172.24.4.174:6443: connect: connection refused" interval="400ms"
May 13 03:39:27.877426 kubelet[2346]: I0513 03:39:27.877363 2346 kubelet_node_status.go:76] "Attempting to register node" node="ci-4284-0-0-n-62b177a255.novalocal"
May 13 03:39:27.878024 kubelet[2346]: E0513 03:39:27.877944 2346 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://172.24.4.174:6443/api/v1/nodes\": dial tcp 172.24.4.174:6443: connect: connection refused" node="ci-4284-0-0-n-62b177a255.novalocal"
May 13 03:39:27.982076 containerd[1485]: time="2025-05-13T03:39:27.981133860Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4284-0-0-n-62b177a255.novalocal,Uid:3f62da39a5a082cfe6636cfb24bd792c,Namespace:kube-system,Attempt:0,}"
May 13 03:39:27.998456 containerd[1485]: time="2025-05-13T03:39:27.997930773Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4284-0-0-n-62b177a255.novalocal,Uid:ef6ac5e663fbecbe7af44bb9f5e5693b,Namespace:kube-system,Attempt:0,}"
May 13 03:39:27.999695 containerd[1485]: time="2025-05-13T03:39:27.999590881Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4284-0-0-n-62b177a255.novalocal,Uid:b7939c7d29c82be2e2ec7d16eb151d95,Namespace:kube-system,Attempt:0,}"
May 13 03:39:28.074537 containerd[1485]: time="2025-05-13T03:39:28.062444057Z" level=info msg="connecting to shim e67fc13ee684f0ea6f8da3cbbd4984a657977f62aa4d30c9244ea0cb6790816e" address="unix:///run/containerd/s/9c3c225d175cdef827bcd06671383b19af2f7a35a4ac21bba5d4aa6b93a9a54e" namespace=k8s.io protocol=ttrpc version=3
May 13 03:39:28.104800 kubelet[2346]: E0513 03:39:28.103210 2346 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.174:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4284-0-0-n-62b177a255.novalocal?timeout=10s\": dial tcp 172.24.4.174:6443: connect: connection refused" interval="800ms"
May 13 03:39:28.118393 containerd[1485]: time="2025-05-13T03:39:28.118306800Z" level=info msg="connecting to shim 2c14a18e111e48a5ad454d6645630a8f7b475fda8b413df677e6a0c0e0266b3c" address="unix:///run/containerd/s/6fe3dc96729bce56bc449f36f0470ccaa882a58589c6092dca5768dcc7c3f38e" namespace=k8s.io protocol=ttrpc version=3
May 13 03:39:28.131806 containerd[1485]: time="2025-05-13T03:39:28.131666867Z" level=info msg="connecting to shim 9cdf8ebf7fb24eee9ec213bfc05fe5ac6e1b3c3572b9cce1a82cd46862ffe765" address="unix:///run/containerd/s/d6f651c6f81a24ed4e6e599443e97a966660a62cd111d18d8f54a46d6a294fed" namespace=k8s.io protocol=ttrpc version=3
May 13 03:39:28.140101 systemd[1]: Started cri-containerd-e67fc13ee684f0ea6f8da3cbbd4984a657977f62aa4d30c9244ea0cb6790816e.scope - libcontainer container e67fc13ee684f0ea6f8da3cbbd4984a657977f62aa4d30c9244ea0cb6790816e.
May 13 03:39:28.162466 systemd[1]: Started cri-containerd-2c14a18e111e48a5ad454d6645630a8f7b475fda8b413df677e6a0c0e0266b3c.scope - libcontainer container 2c14a18e111e48a5ad454d6645630a8f7b475fda8b413df677e6a0c0e0266b3c.
May 13 03:39:28.190378 systemd[1]: Started cri-containerd-9cdf8ebf7fb24eee9ec213bfc05fe5ac6e1b3c3572b9cce1a82cd46862ffe765.scope - libcontainer container 9cdf8ebf7fb24eee9ec213bfc05fe5ac6e1b3c3572b9cce1a82cd46862ffe765.
May 13 03:39:28.233493 containerd[1485]: time="2025-05-13T03:39:28.232890952Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4284-0-0-n-62b177a255.novalocal,Uid:3f62da39a5a082cfe6636cfb24bd792c,Namespace:kube-system,Attempt:0,} returns sandbox id \"e67fc13ee684f0ea6f8da3cbbd4984a657977f62aa4d30c9244ea0cb6790816e\""
May 13 03:39:28.240507 containerd[1485]: time="2025-05-13T03:39:28.240120321Z" level=info msg="CreateContainer within sandbox \"e67fc13ee684f0ea6f8da3cbbd4984a657977f62aa4d30c9244ea0cb6790816e\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
May 13 03:39:28.252864 containerd[1485]: time="2025-05-13T03:39:28.252819382Z" level=info msg="Container fa1d34c9fd4ddd3fbd37d7bb7616101396bee92cef2ee75a0f4bb9d2b8a91e7e: CDI devices from CRI Config.CDIDevices: []"
May 13 03:39:28.257452 containerd[1485]: time="2025-05-13T03:39:28.257425254Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4284-0-0-n-62b177a255.novalocal,Uid:b7939c7d29c82be2e2ec7d16eb151d95,Namespace:kube-system,Attempt:0,} returns sandbox id \"2c14a18e111e48a5ad454d6645630a8f7b475fda8b413df677e6a0c0e0266b3c\""
May 13 03:39:28.260972 containerd[1485]: time="2025-05-13T03:39:28.260945798Z" level=info msg="CreateContainer within sandbox \"2c14a18e111e48a5ad454d6645630a8f7b475fda8b413df677e6a0c0e0266b3c\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
May 13 03:39:28.277071 containerd[1485]: time="2025-05-13T03:39:28.275779027Z" level=info msg="Container 8bb9bad6304d915f4209ea7db37b651f2d29446644b7d44a5723cdc0d1cfd1fa: CDI devices from CRI Config.CDIDevices: []"
May 13 03:39:28.277661 containerd[1485]: time="2025-05-13T03:39:28.277454023Z" level=info msg="CreateContainer within sandbox \"e67fc13ee684f0ea6f8da3cbbd4984a657977f62aa4d30c9244ea0cb6790816e\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"fa1d34c9fd4ddd3fbd37d7bb7616101396bee92cef2ee75a0f4bb9d2b8a91e7e\""
May 13 03:39:28.284888 containerd[1485]: time="2025-05-13T03:39:28.284840692Z" level=info msg="StartContainer for \"fa1d34c9fd4ddd3fbd37d7bb7616101396bee92cef2ee75a0f4bb9d2b8a91e7e\""
May 13 03:39:28.286410 kubelet[2346]: I0513 03:39:28.285300 2346 kubelet_node_status.go:76] "Attempting to register node" node="ci-4284-0-0-n-62b177a255.novalocal"
May 13 03:39:28.286410 kubelet[2346]: E0513 03:39:28.285612 2346 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://172.24.4.174:6443/api/v1/nodes\": dial tcp 172.24.4.174:6443: connect: connection refused" node="ci-4284-0-0-n-62b177a255.novalocal"
May 13 03:39:28.292426 containerd[1485]: time="2025-05-13T03:39:28.292388143Z" level=info msg="connecting to shim fa1d34c9fd4ddd3fbd37d7bb7616101396bee92cef2ee75a0f4bb9d2b8a91e7e" address="unix:///run/containerd/s/9c3c225d175cdef827bcd06671383b19af2f7a35a4ac21bba5d4aa6b93a9a54e" protocol=ttrpc version=3
May 13 03:39:28.297763 containerd[1485]: time="2025-05-13T03:39:28.297730601Z" level=info msg="CreateContainer within sandbox \"2c14a18e111e48a5ad454d6645630a8f7b475fda8b413df677e6a0c0e0266b3c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"8bb9bad6304d915f4209ea7db37b651f2d29446644b7d44a5723cdc0d1cfd1fa\""
May 13 03:39:28.299626 containerd[1485]: time="2025-05-13T03:39:28.298384404Z" level=info msg="StartContainer for \"8bb9bad6304d915f4209ea7db37b651f2d29446644b7d44a5723cdc0d1cfd1fa\""
May 13 03:39:28.299626 containerd[1485]: time="2025-05-13T03:39:28.299533163Z" level=info msg="connecting to shim 8bb9bad6304d915f4209ea7db37b651f2d29446644b7d44a5723cdc0d1cfd1fa" address="unix:///run/containerd/s/6fe3dc96729bce56bc449f36f0470ccaa882a58589c6092dca5768dcc7c3f38e" protocol=ttrpc version=3
May 13 03:39:28.304503 containerd[1485]: time="2025-05-13T03:39:28.304472828Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4284-0-0-n-62b177a255.novalocal,Uid:ef6ac5e663fbecbe7af44bb9f5e5693b,Namespace:kube-system,Attempt:0,} returns sandbox id \"9cdf8ebf7fb24eee9ec213bfc05fe5ac6e1b3c3572b9cce1a82cd46862ffe765\""
May 13 03:39:28.307895 containerd[1485]: time="2025-05-13T03:39:28.307712889Z" level=info msg="CreateContainer within sandbox \"9cdf8ebf7fb24eee9ec213bfc05fe5ac6e1b3c3572b9cce1a82cd46862ffe765\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
May 13 03:39:28.321534 systemd[1]: Started cri-containerd-fa1d34c9fd4ddd3fbd37d7bb7616101396bee92cef2ee75a0f4bb9d2b8a91e7e.scope - libcontainer container fa1d34c9fd4ddd3fbd37d7bb7616101396bee92cef2ee75a0f4bb9d2b8a91e7e.
May 13 03:39:28.325711 containerd[1485]: time="2025-05-13T03:39:28.324977502Z" level=info msg="Container d2b58c612fbb8cfa98964b7794ef854114d8be25af7c10366ddbb3ca54f5945a: CDI devices from CRI Config.CDIDevices: []"
May 13 03:39:28.336397 systemd[1]: Started cri-containerd-8bb9bad6304d915f4209ea7db37b651f2d29446644b7d44a5723cdc0d1cfd1fa.scope - libcontainer container 8bb9bad6304d915f4209ea7db37b651f2d29446644b7d44a5723cdc0d1cfd1fa.
May 13 03:39:28.339267 kubelet[2346]: W0513 03:39:28.339214 2346 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.24.4.174:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.174:6443: connect: connection refused
May 13 03:39:28.339389 kubelet[2346]: E0513 03:39:28.339284 2346 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.24.4.174:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.24.4.174:6443: connect: connection refused" logger="UnhandledError"
May 13 03:39:28.346168 containerd[1485]: time="2025-05-13T03:39:28.346134164Z" level=info msg="CreateContainer within sandbox \"9cdf8ebf7fb24eee9ec213bfc05fe5ac6e1b3c3572b9cce1a82cd46862ffe765\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"d2b58c612fbb8cfa98964b7794ef854114d8be25af7c10366ddbb3ca54f5945a\""
May 13 03:39:28.346709 containerd[1485]: time="2025-05-13T03:39:28.346689694Z" level=info msg="StartContainer for \"d2b58c612fbb8cfa98964b7794ef854114d8be25af7c10366ddbb3ca54f5945a\""
May 13 03:39:28.349313 containerd[1485]: time="2025-05-13T03:39:28.349284148Z" level=info msg="connecting to shim d2b58c612fbb8cfa98964b7794ef854114d8be25af7c10366ddbb3ca54f5945a" address="unix:///run/containerd/s/d6f651c6f81a24ed4e6e599443e97a966660a62cd111d18d8f54a46d6a294fed" protocol=ttrpc version=3
May 13 03:39:28.364391 kubelet[2346]: W0513 03:39:28.364340 2346 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.24.4.174:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.174:6443: connect: connection refused
May 13 03:39:28.364639 kubelet[2346]: E0513 03:39:28.364543 2346 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.24.4.174:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.24.4.174:6443: connect: connection refused" logger="UnhandledError"
May 13 03:39:28.370574 systemd[1]: Started cri-containerd-d2b58c612fbb8cfa98964b7794ef854114d8be25af7c10366ddbb3ca54f5945a.scope - libcontainer container d2b58c612fbb8cfa98964b7794ef854114d8be25af7c10366ddbb3ca54f5945a.
May 13 03:39:28.393310 kubelet[2346]: W0513 03:39:28.393045 2346 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.24.4.174:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.24.4.174:6443: connect: connection refused
May 13 03:39:28.393310 kubelet[2346]: E0513 03:39:28.393108 2346 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.24.4.174:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.24.4.174:6443: connect: connection refused" logger="UnhandledError"
May 13 03:39:28.428835 containerd[1485]: time="2025-05-13T03:39:28.428347976Z" level=info msg="StartContainer for \"fa1d34c9fd4ddd3fbd37d7bb7616101396bee92cef2ee75a0f4bb9d2b8a91e7e\" returns successfully"
May 13 03:39:28.444709 containerd[1485]: time="2025-05-13T03:39:28.444379913Z" level=info msg="StartContainer for \"8bb9bad6304d915f4209ea7db37b651f2d29446644b7d44a5723cdc0d1cfd1fa\" returns successfully"
May 13 03:39:28.485615 containerd[1485]: time="2025-05-13T03:39:28.484892029Z" level=info msg="StartContainer for \"d2b58c612fbb8cfa98964b7794ef854114d8be25af7c10366ddbb3ca54f5945a\" returns successfully"
May 13 03:39:28.540016 kubelet[2346]: E0513 03:39:28.539663 2346 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4284-0-0-n-62b177a255.novalocal\" not found" node="ci-4284-0-0-n-62b177a255.novalocal"
May 13 03:39:28.545047 kubelet[2346]: E0513 03:39:28.544776 2346 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4284-0-0-n-62b177a255.novalocal\" not found" node="ci-4284-0-0-n-62b177a255.novalocal"
May 13 03:39:28.548392 kubelet[2346]: E0513 03:39:28.547986 2346 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4284-0-0-n-62b177a255.novalocal\" not found" node="ci-4284-0-0-n-62b177a255.novalocal"
May 13 03:39:29.088273 kubelet[2346]: I0513 03:39:29.087711 2346 kubelet_node_status.go:76] "Attempting to register node" node="ci-4284-0-0-n-62b177a255.novalocal"
May 13 03:39:29.555250 kubelet[2346]: E0513 03:39:29.553799 2346 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4284-0-0-n-62b177a255.novalocal\" not found" node="ci-4284-0-0-n-62b177a255.novalocal"
May 13 03:39:29.556159 kubelet[2346]: E0513 03:39:29.554844 2346 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4284-0-0-n-62b177a255.novalocal\" not found" node="ci-4284-0-0-n-62b177a255.novalocal"
May 13 03:39:30.552417 kubelet[2346]: E0513 03:39:30.552390 2346 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4284-0-0-n-62b177a255.novalocal\" not found" node="ci-4284-0-0-n-62b177a255.novalocal"
May 13 03:39:30.610527 kubelet[2346]: I0513 03:39:30.610271 2346 kubelet_node_status.go:79] "Successfully registered node" node="ci-4284-0-0-n-62b177a255.novalocal"
May 13 03:39:30.698949 kubelet[2346]: I0513 03:39:30.698865 2346 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4284-0-0-n-62b177a255.novalocal"
May 13 03:39:30.709141 kubelet[2346]: E0513 03:39:30.708463 2346 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4284-0-0-n-62b177a255.novalocal\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4284-0-0-n-62b177a255.novalocal"
May 13 03:39:30.709141 kubelet[2346]: I0513 03:39:30.708492 2346 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4284-0-0-n-62b177a255.novalocal"
May 13 03:39:30.712606 kubelet[2346]: E0513 03:39:30.710887 2346 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4284-0-0-n-62b177a255.novalocal\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4284-0-0-n-62b177a255.novalocal"
May 13 03:39:30.712606 kubelet[2346]: I0513 03:39:30.710908 2346 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4284-0-0-n-62b177a255.novalocal"
May 13 03:39:30.716180 kubelet[2346]: E0513 03:39:30.716146 2346 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4284-0-0-n-62b177a255.novalocal\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4284-0-0-n-62b177a255.novalocal"
May 13 03:39:31.454870 kubelet[2346]: I0513 03:39:31.454291 2346 apiserver.go:52] "Watching apiserver"
May 13 03:39:31.491316 kubelet[2346]: I0513 03:39:31.491168 2346 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
May 13 03:39:32.450257 kubelet[2346]: I0513 03:39:32.448460 2346 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4284-0-0-n-62b177a255.novalocal"
May 13 03:39:32.457685 kubelet[2346]: W0513 03:39:32.457433 2346 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
May 13 03:39:33.026938 systemd[1]: Reload requested from client PID 2609 ('systemctl') (unit session-11.scope)...
May 13 03:39:33.027581 systemd[1]: Reloading...
May 13 03:39:33.152269 zram_generator::config[2655]: No configuration found.
May 13 03:39:33.305089 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 13 03:39:33.454325 systemd[1]: Reloading finished in 425 ms.
May 13 03:39:33.484557 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
May 13 03:39:33.491985 systemd[1]: kubelet.service: Deactivated successfully.
May 13 03:39:33.492322 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 13 03:39:33.492397 systemd[1]: kubelet.service: Consumed 1.218s CPU time, 127.5M memory peak.
May 13 03:39:33.494552 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 13 03:39:33.730334 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 13 03:39:33.741211 (kubelet)[2718]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
May 13 03:39:33.922453 kubelet[2718]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 13 03:39:33.922453 kubelet[2718]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
May 13 03:39:33.922453 kubelet[2718]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 13 03:39:33.922453 kubelet[2718]: I0513 03:39:33.922192 2718 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
May 13 03:39:33.933854 kubelet[2718]: I0513 03:39:33.933786 2718 server.go:520] "Kubelet version" kubeletVersion="v1.32.0"
May 13 03:39:33.933854 kubelet[2718]: I0513 03:39:33.933813 2718 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
May 13 03:39:33.934160 kubelet[2718]: I0513 03:39:33.934133 2718 server.go:954] "Client rotation is on, will bootstrap in background"
May 13 03:39:33.935614 kubelet[2718]: I0513 03:39:33.935580 2718 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
May 13 03:39:33.938992 kubelet[2718]: I0513 03:39:33.938952 2718 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 13 03:39:33.951757 kubelet[2718]: I0513 03:39:33.950695 2718 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
May 13 03:39:33.956169 kubelet[2718]: I0513 03:39:33.956061 2718 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
May 13 03:39:33.956446 kubelet[2718]: I0513 03:39:33.956368 2718 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
May 13 03:39:33.956698 kubelet[2718]: I0513 03:39:33.956398 2718 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4284-0-0-n-62b177a255.novalocal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
May 13 03:39:33.956698 kubelet[2718]: I0513 03:39:33.956665 2718 topology_manager.go:138] "Creating topology manager with none policy"
May 13 03:39:33.956698 kubelet[2718]: I0513 03:39:33.956702 2718 container_manager_linux.go:304] "Creating device plugin manager"
May 13 03:39:33.957176 kubelet[2718]: I0513 03:39:33.956740 2718 state_mem.go:36] "Initialized new in-memory state store"
May 13 03:39:33.962096 kubelet[2718]: I0513 03:39:33.957831 2718 kubelet.go:446] "Attempting to sync node with API server"
May 13 03:39:33.962096 kubelet[2718]: I0513 03:39:33.957853 2718 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
May 13 03:39:33.962096 kubelet[2718]: I0513 03:39:33.957875 2718 kubelet.go:352] "Adding apiserver pod source"
May 13 03:39:33.962096 kubelet[2718]: I0513 03:39:33.959284 2718 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
May 13 03:39:33.979640 kubelet[2718]: I0513 03:39:33.979417 2718 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.1" apiVersion="v1"
May 13 03:39:33.980786 kubelet[2718]: I0513 03:39:33.980544 2718 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
May 13 03:39:33.981177 kubelet[2718]: I0513 03:39:33.981032 2718 watchdog_linux.go:99] "Systemd watchdog is not enabled"
May 13 03:39:33.981177 kubelet[2718]: I0513 03:39:33.981066 2718 server.go:1287] "Started kubelet"
May 13 03:39:33.994504 kubelet[2718]: I0513 03:39:33.994463 2718 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
May 13 03:39:33.995617 kubelet[2718]: I0513 03:39:33.995521 2718 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
May 13 03:39:33.995837 kubelet[2718]: I0513 03:39:33.995815 2718 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
May 13 03:39:33.997021 kubelet[2718]: I0513 03:39:33.996996 2718 server.go:490] "Adding debug handlers to kubelet server"
May 13 03:39:33.998323 kubelet[2718]: I0513 03:39:33.998284 2718 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
May 13 03:39:33.999349 kubelet[2718]: I0513 03:39:33.999335 2718 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
May 13 03:39:34.002338 kubelet[2718]: I0513 03:39:34.002324 2718 volume_manager.go:297] "Starting Kubelet Volume Manager"
May 13 03:39:34.002527 kubelet[2718]: I0513 03:39:34.002515 2718 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
May 13 03:39:34.002687 kubelet[2718]: I0513 03:39:34.002676 2718 reconciler.go:26] "Reconciler: start to sync state"
May 13 03:39:34.005073 kubelet[2718]: E0513 03:39:34.005053 2718 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
May 13 03:39:34.005191 kubelet[2718]: I0513 03:39:34.005167 2718 factory.go:221] Registration of the systemd container factory successfully
May 13 03:39:34.005360 kubelet[2718]: I0513 03:39:34.005327 2718 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
May 13 03:39:34.009062 kubelet[2718]: I0513 03:39:34.009038 2718 factory.go:221] Registration of the containerd container factory successfully
May 13 03:39:34.012426 kubelet[2718]: I0513 03:39:34.012350 2718 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
May 13 03:39:34.014305 kubelet[2718]: I0513 03:39:34.014114 2718 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
May 13 03:39:34.014305 kubelet[2718]: I0513 03:39:34.014143 2718 status_manager.go:227] "Starting to sync pod status with apiserver"
May 13 03:39:34.014305 kubelet[2718]: I0513 03:39:34.014165 2718 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
May 13 03:39:34.014305 kubelet[2718]: I0513 03:39:34.014176 2718 kubelet.go:2388] "Starting kubelet main sync loop"
May 13 03:39:34.014305 kubelet[2718]: E0513 03:39:34.014215 2718 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
May 13 03:39:34.031429 sudo[2748]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
May 13 03:39:34.031847 sudo[2748]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
May 13 03:39:34.075453 kubelet[2718]: I0513 03:39:34.075432 2718 cpu_manager.go:221] "Starting CPU manager" policy="none"
May 13 03:39:34.075605 kubelet[2718]: I0513 03:39:34.075594 2718 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
May 13 03:39:34.075685 kubelet[2718]: I0513 03:39:34.075677 2718 state_mem.go:36] "Initialized new in-memory state store"
May 13 03:39:34.076007 kubelet[2718]: I0513 03:39:34.075936 2718 state_mem.go:88] "Updated default CPUSet" cpuSet=""
May 13 03:39:34.076007 kubelet[2718]: I0513 03:39:34.075951 2718 state_mem.go:96] "Updated CPUSet assignments" assignments={}
May 13 03:39:34.076007 kubelet[2718]: I0513 03:39:34.075971 2718 policy_none.go:49] "None policy: Start"
May 13 03:39:34.076007 kubelet[2718]: I0513 03:39:34.075981 2718 memory_manager.go:186] "Starting memorymanager" policy="None"
May 13 03:39:34.076221 kubelet[2718]: I0513 03:39:34.075991 2718 state_mem.go:35] "Initializing new in-memory state store"
May 13 03:39:34.076461 kubelet[2718]: I0513 03:39:34.076451 2718 state_mem.go:75] "Updated machine memory state"
May 13 03:39:34.083402 kubelet[2718]: I0513 03:39:34.082902 2718 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
May 13 03:39:34.083402 kubelet[2718]: I0513 03:39:34.083043 2718 eviction_manager.go:189] "Eviction manager: starting control loop"
May 13 03:39:34.083402 kubelet[2718]: I0513 03:39:34.083055 2718 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
May 13 03:39:34.086179 kubelet[2718]: I0513 03:39:34.086163 2718 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
May 13 03:39:34.088658 kubelet[2718]: E0513 03:39:34.087935 2718 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
May 13 03:39:34.114780 kubelet[2718]: I0513 03:39:34.114695 2718 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4284-0-0-n-62b177a255.novalocal"
May 13 03:39:34.115649 kubelet[2718]: I0513 03:39:34.115132 2718 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4284-0-0-n-62b177a255.novalocal"
May 13 03:39:34.116442 kubelet[2718]: I0513 03:39:34.115272 2718 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4284-0-0-n-62b177a255.novalocal"
May 13 03:39:34.124874 kubelet[2718]: W0513 03:39:34.124850 2718 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
May 13 03:39:34.127092 kubelet[2718]: W0513 03:39:34.126945 2718 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
May 13 03:39:34.128347 kubelet[2718]: W0513 03:39:34.128300 2718 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
May 13 03:39:34.128573 kubelet[2718]: E0513 03:39:34.128492 2718 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4284-0-0-n-62b177a255.novalocal\" already exists" pod="kube-system/kube-apiserver-ci-4284-0-0-n-62b177a255.novalocal"
May 13 03:39:34.194499 kubelet[2718]: I0513 03:39:34.194437 2718 kubelet_node_status.go:76] "Attempting to register node" node="ci-4284-0-0-n-62b177a255.novalocal"
May 13 03:39:34.204490 kubelet[2718]: I0513 03:39:34.204059 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3f62da39a5a082cfe6636cfb24bd792c-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4284-0-0-n-62b177a255.novalocal\" (UID: \"3f62da39a5a082cfe6636cfb24bd792c\") " pod="kube-system/kube-apiserver-ci-4284-0-0-n-62b177a255.novalocal"
May 13 03:39:34.204490 kubelet[2718]: I0513 03:39:34.204098 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b7939c7d29c82be2e2ec7d16eb151d95-flexvolume-dir\") pod \"kube-controller-manager-ci-4284-0-0-n-62b177a255.novalocal\" (UID: \"b7939c7d29c82be2e2ec7d16eb151d95\") " pod="kube-system/kube-controller-manager-ci-4284-0-0-n-62b177a255.novalocal"
May 13 03:39:34.204490 kubelet[2718]: I0513 03:39:34.204119 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b7939c7d29c82be2e2ec7d16eb151d95-kubeconfig\") pod \"kube-controller-manager-ci-4284-0-0-n-62b177a255.novalocal\" (UID: \"b7939c7d29c82be2e2ec7d16eb151d95\") " pod="kube-system/kube-controller-manager-ci-4284-0-0-n-62b177a255.novalocal"
May 13 03:39:34.204490 kubelet[2718]: I0513 03:39:34.204143 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ef6ac5e663fbecbe7af44bb9f5e5693b-kubeconfig\") pod \"kube-scheduler-ci-4284-0-0-n-62b177a255.novalocal\" (UID: \"ef6ac5e663fbecbe7af44bb9f5e5693b\") " pod="kube-system/kube-scheduler-ci-4284-0-0-n-62b177a255.novalocal"
May 13 03:39:34.204698 kubelet[2718]: I0513 03:39:34.204161 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3f62da39a5a082cfe6636cfb24bd792c-ca-certs\") pod \"kube-apiserver-ci-4284-0-0-n-62b177a255.novalocal\" (UID: \"3f62da39a5a082cfe6636cfb24bd792c\") " pod="kube-system/kube-apiserver-ci-4284-0-0-n-62b177a255.novalocal"
May 13 03:39:34.204698 kubelet[2718]: I0513 03:39:34.204185 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3f62da39a5a082cfe6636cfb24bd792c-k8s-certs\") pod \"kube-apiserver-ci-4284-0-0-n-62b177a255.novalocal\" (UID: \"3f62da39a5a082cfe6636cfb24bd792c\") " pod="kube-system/kube-apiserver-ci-4284-0-0-n-62b177a255.novalocal"
May 13 03:39:34.204698 kubelet[2718]: I0513 03:39:34.204213 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b7939c7d29c82be2e2ec7d16eb151d95-ca-certs\") pod \"kube-controller-manager-ci-4284-0-0-n-62b177a255.novalocal\" (UID: \"b7939c7d29c82be2e2ec7d16eb151d95\") " pod="kube-system/kube-controller-manager-ci-4284-0-0-n-62b177a255.novalocal"
May 13 03:39:34.204698 kubelet[2718]: I0513 03:39:34.204247 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b7939c7d29c82be2e2ec7d16eb151d95-k8s-certs\") pod \"kube-controller-manager-ci-4284-0-0-n-62b177a255.novalocal\" (UID: \"b7939c7d29c82be2e2ec7d16eb151d95\") " pod="kube-system/kube-controller-manager-ci-4284-0-0-n-62b177a255.novalocal"
May 13 03:39:34.204797 kubelet[2718]: I0513 03:39:34.204269 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b7939c7d29c82be2e2ec7d16eb151d95-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4284-0-0-n-62b177a255.novalocal\" (UID: \"b7939c7d29c82be2e2ec7d16eb151d95\") " pod="kube-system/kube-controller-manager-ci-4284-0-0-n-62b177a255.novalocal"
May 13 03:39:34.209011 kubelet[2718]: I0513 03:39:34.208742 2718 kubelet_node_status.go:125] "Node was previously registered" node="ci-4284-0-0-n-62b177a255.novalocal"
May 13 03:39:34.209011 kubelet[2718]: I0513 03:39:34.208808 2718 kubelet_node_status.go:79] "Successfully registered node" node="ci-4284-0-0-n-62b177a255.novalocal"
May 13 03:39:34.633566 sudo[2748]: pam_unix(sudo:session): session closed for user root
May 13 03:39:34.961289 kubelet[2718]: I0513 03:39:34.960670 2718 apiserver.go:52] "Watching apiserver"
May 13 03:39:35.003176 kubelet[2718]: I0513 03:39:35.003128 2718 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
May 13 03:39:35.053247 kubelet[2718]: I0513 03:39:35.050914 2718 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4284-0-0-n-62b177a255.novalocal"
May 13 03:39:35.053247 kubelet[2718]: I0513 03:39:35.051074 2718 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4284-0-0-n-62b177a255.novalocal"
May 13 03:39:35.075702 kubelet[2718]: W0513 03:39:35.075524 2718 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
May 13 03:39:35.076425 kubelet[2718]: E0513 03:39:35.076352 2718 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4284-0-0-n-62b177a255.novalocal\" already exists" pod="kube-system/kube-apiserver-ci-4284-0-0-n-62b177a255.novalocal"
May 13 03:39:35.080059 kubelet[2718]: W0513 03:39:35.080026 2718 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
May 13 03:39:35.080172 kubelet[2718]: E0513 03:39:35.080140 2718 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4284-0-0-n-62b177a255.novalocal\" already exists" pod="kube-system/kube-scheduler-ci-4284-0-0-n-62b177a255.novalocal"
May 13 03:39:35.135312 kubelet[2718]: I0513 03:39:35.135260 2718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4284-0-0-n-62b177a255.novalocal" podStartSLOduration=1.13522565 podStartE2EDuration="1.13522565s" podCreationTimestamp="2025-05-13 03:39:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 03:39:35.117923395 +0000 UTC m=+1.371543051" watchObservedRunningTime="2025-05-13 03:39:35.13522565 +0000 UTC m=+1.388845256"
May 13 03:39:35.150250 kubelet[2718]: I0513 03:39:35.149389 2718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4284-0-0-n-62b177a255.novalocal" podStartSLOduration=3.149371402 podStartE2EDuration="3.149371402s" podCreationTimestamp="2025-05-13 03:39:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 03:39:35.135713297 +0000 UTC m=+1.389332903" watchObservedRunningTime="2025-05-13 03:39:35.149371402 +0000 UTC m=+1.402991018"
May 13 03:39:35.163194 kubelet[2718]: I0513 03:39:35.163148 2718 pod_startup_latency_tracker.go:104] "Observed pod startup duration"
pod="kube-system/kube-controller-manager-ci-4284-0-0-n-62b177a255.novalocal" podStartSLOduration=1.163131765 podStartE2EDuration="1.163131765s" podCreationTimestamp="2025-05-13 03:39:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 03:39:35.150665281 +0000 UTC m=+1.404284887" watchObservedRunningTime="2025-05-13 03:39:35.163131765 +0000 UTC m=+1.416751371" May 13 03:39:37.566738 sudo[1766]: pam_unix(sudo:session): session closed for user root May 13 03:39:37.850843 sshd[1765]: Connection closed by 172.24.4.1 port 44290 May 13 03:39:37.852387 sshd-session[1762]: pam_unix(sshd:session): session closed for user core May 13 03:39:37.865532 systemd[1]: sshd@8-172.24.4.174:22-172.24.4.1:44290.service: Deactivated successfully. May 13 03:39:37.872523 systemd[1]: session-11.scope: Deactivated successfully. May 13 03:39:37.874466 systemd[1]: session-11.scope: Consumed 7.880s CPU time, 266.2M memory peak. May 13 03:39:37.876472 systemd-logind[1468]: Session 11 logged out. Waiting for processes to exit. May 13 03:39:37.877991 systemd-logind[1468]: Removed session 11. May 13 03:39:38.016984 kubelet[2718]: I0513 03:39:38.016787 2718 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 13 03:39:38.017557 containerd[1485]: time="2025-05-13T03:39:38.017492185Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
May 13 03:39:38.018060 kubelet[2718]: I0513 03:39:38.017788 2718 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 13 03:39:38.093168 kubelet[2718]: W0513 03:39:38.093133 2718 reflector.go:569] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ci-4284-0-0-n-62b177a255.novalocal" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4284-0-0-n-62b177a255.novalocal' and this object May 13 03:39:38.093519 kubelet[2718]: E0513 03:39:38.093319 2718 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"cilium-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"cilium-config\" is forbidden: User \"system:node:ci-4284-0-0-n-62b177a255.novalocal\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4284-0-0-n-62b177a255.novalocal' and this object" logger="UnhandledError" May 13 03:39:38.093519 kubelet[2718]: W0513 03:39:38.093391 2718 reflector.go:569] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ci-4284-0-0-n-62b177a255.novalocal" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4284-0-0-n-62b177a255.novalocal' and this object May 13 03:39:38.093519 kubelet[2718]: E0513 03:39:38.093406 2718 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"hubble-server-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"hubble-server-certs\" is forbidden: User \"system:node:ci-4284-0-0-n-62b177a255.novalocal\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4284-0-0-n-62b177a255.novalocal' and this object" 
logger="UnhandledError" May 13 03:39:38.093519 kubelet[2718]: W0513 03:39:38.093451 2718 reflector.go:569] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ci-4284-0-0-n-62b177a255.novalocal" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4284-0-0-n-62b177a255.novalocal' and this object May 13 03:39:38.093654 kubelet[2718]: E0513 03:39:38.093465 2718 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"cilium-clustermesh\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-clustermesh\" is forbidden: User \"system:node:ci-4284-0-0-n-62b177a255.novalocal\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4284-0-0-n-62b177a255.novalocal' and this object" logger="UnhandledError" May 13 03:39:38.096871 systemd[1]: Created slice kubepods-besteffort-pod8df9be7b_c655_4446_be99_98ffc414911e.slice - libcontainer container kubepods-besteffort-pod8df9be7b_c655_4446_be99_98ffc414911e.slice. May 13 03:39:38.122037 systemd[1]: Created slice kubepods-burstable-pod9fe07975_15be_419c_b043_80900aae2184.slice - libcontainer container kubepods-burstable-pod9fe07975_15be_419c_b043_80900aae2184.slice. 
May 13 03:39:38.134559 kubelet[2718]: I0513 03:39:38.134528 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9fe07975-15be-419c-b043-80900aae2184-etc-cni-netd\") pod \"cilium-g6zmv\" (UID: \"9fe07975-15be-419c-b043-80900aae2184\") " pod="kube-system/cilium-g6zmv" May 13 03:39:38.137261 kubelet[2718]: I0513 03:39:38.134755 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9fe07975-15be-419c-b043-80900aae2184-cilium-config-path\") pod \"cilium-g6zmv\" (UID: \"9fe07975-15be-419c-b043-80900aae2184\") " pod="kube-system/cilium-g6zmv" May 13 03:39:38.137261 kubelet[2718]: I0513 03:39:38.134782 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9fe07975-15be-419c-b043-80900aae2184-cilium-run\") pod \"cilium-g6zmv\" (UID: \"9fe07975-15be-419c-b043-80900aae2184\") " pod="kube-system/cilium-g6zmv" May 13 03:39:38.137261 kubelet[2718]: I0513 03:39:38.134836 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9fe07975-15be-419c-b043-80900aae2184-xtables-lock\") pod \"cilium-g6zmv\" (UID: \"9fe07975-15be-419c-b043-80900aae2184\") " pod="kube-system/cilium-g6zmv" May 13 03:39:38.137261 kubelet[2718]: I0513 03:39:38.134858 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9fe07975-15be-419c-b043-80900aae2184-clustermesh-secrets\") pod \"cilium-g6zmv\" (UID: \"9fe07975-15be-419c-b043-80900aae2184\") " pod="kube-system/cilium-g6zmv" May 13 03:39:38.137261 kubelet[2718]: I0513 03:39:38.134891 2718 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8df9be7b-c655-4446-be99-98ffc414911e-xtables-lock\") pod \"kube-proxy-lk9k6\" (UID: \"8df9be7b-c655-4446-be99-98ffc414911e\") " pod="kube-system/kube-proxy-lk9k6" May 13 03:39:38.137261 kubelet[2718]: I0513 03:39:38.134914 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8df9be7b-c655-4446-be99-98ffc414911e-lib-modules\") pod \"kube-proxy-lk9k6\" (UID: \"8df9be7b-c655-4446-be99-98ffc414911e\") " pod="kube-system/kube-proxy-lk9k6" May 13 03:39:38.137543 kubelet[2718]: I0513 03:39:38.134934 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9fe07975-15be-419c-b043-80900aae2184-host-proc-sys-kernel\") pod \"cilium-g6zmv\" (UID: \"9fe07975-15be-419c-b043-80900aae2184\") " pod="kube-system/cilium-g6zmv" May 13 03:39:38.137543 kubelet[2718]: I0513 03:39:38.134960 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9fe07975-15be-419c-b043-80900aae2184-bpf-maps\") pod \"cilium-g6zmv\" (UID: \"9fe07975-15be-419c-b043-80900aae2184\") " pod="kube-system/cilium-g6zmv" May 13 03:39:38.137543 kubelet[2718]: I0513 03:39:38.134980 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9fe07975-15be-419c-b043-80900aae2184-host-proc-sys-net\") pod \"cilium-g6zmv\" (UID: \"9fe07975-15be-419c-b043-80900aae2184\") " pod="kube-system/cilium-g6zmv" May 13 03:39:38.137543 kubelet[2718]: I0513 03:39:38.135000 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5d2gp\" (UniqueName: 
\"kubernetes.io/projected/9fe07975-15be-419c-b043-80900aae2184-kube-api-access-5d2gp\") pod \"cilium-g6zmv\" (UID: \"9fe07975-15be-419c-b043-80900aae2184\") " pod="kube-system/cilium-g6zmv" May 13 03:39:38.137543 kubelet[2718]: I0513 03:39:38.135019 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9fe07975-15be-419c-b043-80900aae2184-hostproc\") pod \"cilium-g6zmv\" (UID: \"9fe07975-15be-419c-b043-80900aae2184\") " pod="kube-system/cilium-g6zmv" May 13 03:39:38.137543 kubelet[2718]: I0513 03:39:38.135039 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9fe07975-15be-419c-b043-80900aae2184-cilium-cgroup\") pod \"cilium-g6zmv\" (UID: \"9fe07975-15be-419c-b043-80900aae2184\") " pod="kube-system/cilium-g6zmv" May 13 03:39:38.137689 kubelet[2718]: I0513 03:39:38.135057 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9fe07975-15be-419c-b043-80900aae2184-cni-path\") pod \"cilium-g6zmv\" (UID: \"9fe07975-15be-419c-b043-80900aae2184\") " pod="kube-system/cilium-g6zmv" May 13 03:39:38.137689 kubelet[2718]: I0513 03:39:38.135079 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfldf\" (UniqueName: \"kubernetes.io/projected/8df9be7b-c655-4446-be99-98ffc414911e-kube-api-access-lfldf\") pod \"kube-proxy-lk9k6\" (UID: \"8df9be7b-c655-4446-be99-98ffc414911e\") " pod="kube-system/kube-proxy-lk9k6" May 13 03:39:38.137689 kubelet[2718]: I0513 03:39:38.135098 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/8df9be7b-c655-4446-be99-98ffc414911e-kube-proxy\") pod \"kube-proxy-lk9k6\" (UID: 
\"8df9be7b-c655-4446-be99-98ffc414911e\") " pod="kube-system/kube-proxy-lk9k6" May 13 03:39:38.137689 kubelet[2718]: I0513 03:39:38.135114 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9fe07975-15be-419c-b043-80900aae2184-lib-modules\") pod \"cilium-g6zmv\" (UID: \"9fe07975-15be-419c-b043-80900aae2184\") " pod="kube-system/cilium-g6zmv" May 13 03:39:38.137689 kubelet[2718]: I0513 03:39:38.135133 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9fe07975-15be-419c-b043-80900aae2184-hubble-tls\") pod \"cilium-g6zmv\" (UID: \"9fe07975-15be-419c-b043-80900aae2184\") " pod="kube-system/cilium-g6zmv" May 13 03:39:38.359317 kubelet[2718]: E0513 03:39:38.359218 2718 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found May 13 03:39:38.359317 kubelet[2718]: E0513 03:39:38.359312 2718 projected.go:194] Error preparing data for projected volume kube-api-access-lfldf for pod kube-system/kube-proxy-lk9k6: configmap "kube-root-ca.crt" not found May 13 03:39:38.360314 kubelet[2718]: E0513 03:39:38.360266 2718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8df9be7b-c655-4446-be99-98ffc414911e-kube-api-access-lfldf podName:8df9be7b-c655-4446-be99-98ffc414911e nodeName:}" failed. No retries permitted until 2025-05-13 03:39:38.86018699 +0000 UTC m=+5.113806656 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-lfldf" (UniqueName: "kubernetes.io/projected/8df9be7b-c655-4446-be99-98ffc414911e-kube-api-access-lfldf") pod "kube-proxy-lk9k6" (UID: "8df9be7b-c655-4446-be99-98ffc414911e") : configmap "kube-root-ca.crt" not found May 13 03:39:38.368211 kubelet[2718]: E0513 03:39:38.368147 2718 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found May 13 03:39:38.368443 kubelet[2718]: E0513 03:39:38.368270 2718 projected.go:194] Error preparing data for projected volume kube-api-access-5d2gp for pod kube-system/cilium-g6zmv: configmap "kube-root-ca.crt" not found May 13 03:39:38.369302 kubelet[2718]: E0513 03:39:38.369017 2718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9fe07975-15be-419c-b043-80900aae2184-kube-api-access-5d2gp podName:9fe07975-15be-419c-b043-80900aae2184 nodeName:}" failed. No retries permitted until 2025-05-13 03:39:38.868400465 +0000 UTC m=+5.122020121 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-5d2gp" (UniqueName: "kubernetes.io/projected/9fe07975-15be-419c-b043-80900aae2184-kube-api-access-5d2gp") pod "cilium-g6zmv" (UID: "9fe07975-15be-419c-b043-80900aae2184") : configmap "kube-root-ca.crt" not found May 13 03:39:39.013349 containerd[1485]: time="2025-05-13T03:39:39.011475743Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-lk9k6,Uid:8df9be7b-c655-4446-be99-98ffc414911e,Namespace:kube-system,Attempt:0,}" May 13 03:39:39.046657 kubelet[2718]: I0513 03:39:39.042731 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/192a1f39-6d73-48d5-88ae-2618d67d348d-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-69cxp\" (UID: \"192a1f39-6d73-48d5-88ae-2618d67d348d\") " pod="kube-system/cilium-operator-6c4d7847fc-69cxp" May 13 03:39:39.046657 kubelet[2718]: I0513 03:39:39.042802 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dc9ls\" (UniqueName: \"kubernetes.io/projected/192a1f39-6d73-48d5-88ae-2618d67d348d-kube-api-access-dc9ls\") pod \"cilium-operator-6c4d7847fc-69cxp\" (UID: \"192a1f39-6d73-48d5-88ae-2618d67d348d\") " pod="kube-system/cilium-operator-6c4d7847fc-69cxp" May 13 03:39:39.064100 systemd[1]: Created slice kubepods-besteffort-pod192a1f39_6d73_48d5_88ae_2618d67d348d.slice - libcontainer container kubepods-besteffort-pod192a1f39_6d73_48d5_88ae_2618d67d348d.slice. 
May 13 03:39:39.076162 containerd[1485]: time="2025-05-13T03:39:39.075945517Z" level=info msg="connecting to shim eeafcc0301cb35c2f815ba4e693a181c215cc3957b6b8ca1f7b1122a7dc9ae05" address="unix:///run/containerd/s/edc4c52a210989502d3d9eda4101efb461380e29f904e3988b2b77053d5ab592" namespace=k8s.io protocol=ttrpc version=3 May 13 03:39:39.122409 systemd[1]: Started cri-containerd-eeafcc0301cb35c2f815ba4e693a181c215cc3957b6b8ca1f7b1122a7dc9ae05.scope - libcontainer container eeafcc0301cb35c2f815ba4e693a181c215cc3957b6b8ca1f7b1122a7dc9ae05. May 13 03:39:39.157532 containerd[1485]: time="2025-05-13T03:39:39.157500328Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-lk9k6,Uid:8df9be7b-c655-4446-be99-98ffc414911e,Namespace:kube-system,Attempt:0,} returns sandbox id \"eeafcc0301cb35c2f815ba4e693a181c215cc3957b6b8ca1f7b1122a7dc9ae05\"" May 13 03:39:39.163583 containerd[1485]: time="2025-05-13T03:39:39.163526797Z" level=info msg="CreateContainer within sandbox \"eeafcc0301cb35c2f815ba4e693a181c215cc3957b6b8ca1f7b1122a7dc9ae05\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 13 03:39:39.187000 containerd[1485]: time="2025-05-13T03:39:39.184396809Z" level=info msg="Container 839dce74b514fce5cebd3010a43c94a0856e0722f07d274627bfb2ef50f6c136: CDI devices from CRI Config.CDIDevices: []" May 13 03:39:39.199996 containerd[1485]: time="2025-05-13T03:39:39.199770043Z" level=info msg="CreateContainer within sandbox \"eeafcc0301cb35c2f815ba4e693a181c215cc3957b6b8ca1f7b1122a7dc9ae05\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"839dce74b514fce5cebd3010a43c94a0856e0722f07d274627bfb2ef50f6c136\"" May 13 03:39:39.201304 containerd[1485]: time="2025-05-13T03:39:39.201281731Z" level=info msg="StartContainer for \"839dce74b514fce5cebd3010a43c94a0856e0722f07d274627bfb2ef50f6c136\"" May 13 03:39:39.203219 containerd[1485]: time="2025-05-13T03:39:39.203197749Z" level=info msg="connecting to shim 
839dce74b514fce5cebd3010a43c94a0856e0722f07d274627bfb2ef50f6c136" address="unix:///run/containerd/s/edc4c52a210989502d3d9eda4101efb461380e29f904e3988b2b77053d5ab592" protocol=ttrpc version=3 May 13 03:39:39.231507 systemd[1]: Started cri-containerd-839dce74b514fce5cebd3010a43c94a0856e0722f07d274627bfb2ef50f6c136.scope - libcontainer container 839dce74b514fce5cebd3010a43c94a0856e0722f07d274627bfb2ef50f6c136. May 13 03:39:39.288215 containerd[1485]: time="2025-05-13T03:39:39.288088483Z" level=info msg="StartContainer for \"839dce74b514fce5cebd3010a43c94a0856e0722f07d274627bfb2ef50f6c136\" returns successfully" May 13 03:39:39.332601 containerd[1485]: time="2025-05-13T03:39:39.332125059Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-g6zmv,Uid:9fe07975-15be-419c-b043-80900aae2184,Namespace:kube-system,Attempt:0,}" May 13 03:39:39.365087 containerd[1485]: time="2025-05-13T03:39:39.365042595Z" level=info msg="connecting to shim d39e13ef00390611776c6811cc2e77a0d804041695a596ca3e1e15c02254af19" address="unix:///run/containerd/s/abf6aa3d768b89560490f45a2adc7ab6856b66fcd283774fb29c57d3fcf1bc97" namespace=k8s.io protocol=ttrpc version=3 May 13 03:39:39.371645 containerd[1485]: time="2025-05-13T03:39:39.371610661Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-69cxp,Uid:192a1f39-6d73-48d5-88ae-2618d67d348d,Namespace:kube-system,Attempt:0,}" May 13 03:39:39.390579 systemd[1]: Started cri-containerd-d39e13ef00390611776c6811cc2e77a0d804041695a596ca3e1e15c02254af19.scope - libcontainer container d39e13ef00390611776c6811cc2e77a0d804041695a596ca3e1e15c02254af19. 
May 13 03:39:39.410476 containerd[1485]: time="2025-05-13T03:39:39.410386626Z" level=info msg="connecting to shim e582aaf94acdbb55a81479ff6ab6f7629cae08cc02c326520d1f004fb7554338" address="unix:///run/containerd/s/ae90af1b4381eb67e94d1dd0663422141dfaf132b31cbcffe269bbe95553c697" namespace=k8s.io protocol=ttrpc version=3 May 13 03:39:39.451595 systemd[1]: Started cri-containerd-e582aaf94acdbb55a81479ff6ab6f7629cae08cc02c326520d1f004fb7554338.scope - libcontainer container e582aaf94acdbb55a81479ff6ab6f7629cae08cc02c326520d1f004fb7554338. May 13 03:39:39.456852 containerd[1485]: time="2025-05-13T03:39:39.455663590Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-g6zmv,Uid:9fe07975-15be-419c-b043-80900aae2184,Namespace:kube-system,Attempt:0,} returns sandbox id \"d39e13ef00390611776c6811cc2e77a0d804041695a596ca3e1e15c02254af19\"" May 13 03:39:39.458024 containerd[1485]: time="2025-05-13T03:39:39.457994180Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" May 13 03:39:39.510792 containerd[1485]: time="2025-05-13T03:39:39.510709219Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-69cxp,Uid:192a1f39-6d73-48d5-88ae-2618d67d348d,Namespace:kube-system,Attempt:0,} returns sandbox id \"e582aaf94acdbb55a81479ff6ab6f7629cae08cc02c326520d1f004fb7554338\"" May 13 03:39:39.970574 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2803796711.mount: Deactivated successfully. 
May 13 03:39:44.155197 kubelet[2718]: I0513 03:39:44.154989 2718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-lk9k6" podStartSLOduration=6.154967514 podStartE2EDuration="6.154967514s" podCreationTimestamp="2025-05-13 03:39:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 03:39:40.088651653 +0000 UTC m=+6.342271309" watchObservedRunningTime="2025-05-13 03:39:44.154967514 +0000 UTC m=+10.408587120" May 13 03:39:44.600124 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3990061532.mount: Deactivated successfully. May 13 03:39:46.931209 containerd[1485]: time="2025-05-13T03:39:46.931163640Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 03:39:46.932818 containerd[1485]: time="2025-05-13T03:39:46.932697846Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" May 13 03:39:46.934482 containerd[1485]: time="2025-05-13T03:39:46.934441951Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 03:39:46.937772 containerd[1485]: time="2025-05-13T03:39:46.937735628Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 7.479704795s" May 13 03:39:46.937846 containerd[1485]: 
time="2025-05-13T03:39:46.937771436Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" May 13 03:39:46.940067 containerd[1485]: time="2025-05-13T03:39:46.940031246Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" May 13 03:39:46.942640 containerd[1485]: time="2025-05-13T03:39:46.942142943Z" level=info msg="CreateContainer within sandbox \"d39e13ef00390611776c6811cc2e77a0d804041695a596ca3e1e15c02254af19\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 13 03:39:46.964792 containerd[1485]: time="2025-05-13T03:39:46.964744745Z" level=info msg="Container be58dfa33e66d34753e31022089af105a1f9b73aeb46c2b51e040bcac8c9cd1e: CDI devices from CRI Config.CDIDevices: []" May 13 03:39:46.982302 containerd[1485]: time="2025-05-13T03:39:46.982148255Z" level=info msg="CreateContainer within sandbox \"d39e13ef00390611776c6811cc2e77a0d804041695a596ca3e1e15c02254af19\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"be58dfa33e66d34753e31022089af105a1f9b73aeb46c2b51e040bcac8c9cd1e\"" May 13 03:39:46.983346 containerd[1485]: time="2025-05-13T03:39:46.983318208Z" level=info msg="StartContainer for \"be58dfa33e66d34753e31022089af105a1f9b73aeb46c2b51e040bcac8c9cd1e\"" May 13 03:39:46.984319 containerd[1485]: time="2025-05-13T03:39:46.984293024Z" level=info msg="connecting to shim be58dfa33e66d34753e31022089af105a1f9b73aeb46c2b51e040bcac8c9cd1e" address="unix:///run/containerd/s/abf6aa3d768b89560490f45a2adc7ab6856b66fcd283774fb29c57d3fcf1bc97" protocol=ttrpc version=3 May 13 03:39:47.010380 systemd[1]: Started cri-containerd-be58dfa33e66d34753e31022089af105a1f9b73aeb46c2b51e040bcac8c9cd1e.scope - libcontainer container 
be58dfa33e66d34753e31022089af105a1f9b73aeb46c2b51e040bcac8c9cd1e. May 13 03:39:47.044022 containerd[1485]: time="2025-05-13T03:39:47.043971166Z" level=info msg="StartContainer for \"be58dfa33e66d34753e31022089af105a1f9b73aeb46c2b51e040bcac8c9cd1e\" returns successfully" May 13 03:39:47.058824 systemd[1]: cri-containerd-be58dfa33e66d34753e31022089af105a1f9b73aeb46c2b51e040bcac8c9cd1e.scope: Deactivated successfully. May 13 03:39:47.060301 systemd[1]: cri-containerd-be58dfa33e66d34753e31022089af105a1f9b73aeb46c2b51e040bcac8c9cd1e.scope: Consumed 25ms CPU time, 6.5M memory peak, 2.1M written to disk. May 13 03:39:47.063953 containerd[1485]: time="2025-05-13T03:39:47.063771573Z" level=info msg="TaskExit event in podsandbox handler container_id:\"be58dfa33e66d34753e31022089af105a1f9b73aeb46c2b51e040bcac8c9cd1e\" id:\"be58dfa33e66d34753e31022089af105a1f9b73aeb46c2b51e040bcac8c9cd1e\" pid:3125 exited_at:{seconds:1747107587 nanos:63384666}" May 13 03:39:47.063953 containerd[1485]: time="2025-05-13T03:39:47.063862361Z" level=info msg="received exit event container_id:\"be58dfa33e66d34753e31022089af105a1f9b73aeb46c2b51e040bcac8c9cd1e\" id:\"be58dfa33e66d34753e31022089af105a1f9b73aeb46c2b51e040bcac8c9cd1e\" pid:3125 exited_at:{seconds:1747107587 nanos:63384666}" May 13 03:39:47.964900 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-be58dfa33e66d34753e31022089af105a1f9b73aeb46c2b51e040bcac8c9cd1e-rootfs.mount: Deactivated successfully. 
May 13 03:39:49.116334 containerd[1485]: time="2025-05-13T03:39:49.114229089Z" level=info msg="CreateContainer within sandbox \"d39e13ef00390611776c6811cc2e77a0d804041695a596ca3e1e15c02254af19\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 13 03:39:49.136888 containerd[1485]: time="2025-05-13T03:39:49.136763866Z" level=info msg="Container 2a246280573911fae46b4d73d2c8655cba01db2e2be5fe5c62c4869f705d5439: CDI devices from CRI Config.CDIDevices: []" May 13 03:39:49.157336 containerd[1485]: time="2025-05-13T03:39:49.157155049Z" level=info msg="CreateContainer within sandbox \"d39e13ef00390611776c6811cc2e77a0d804041695a596ca3e1e15c02254af19\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"2a246280573911fae46b4d73d2c8655cba01db2e2be5fe5c62c4869f705d5439\"" May 13 03:39:49.158935 containerd[1485]: time="2025-05-13T03:39:49.158870658Z" level=info msg="StartContainer for \"2a246280573911fae46b4d73d2c8655cba01db2e2be5fe5c62c4869f705d5439\"" May 13 03:39:49.162282 containerd[1485]: time="2025-05-13T03:39:49.161935353Z" level=info msg="connecting to shim 2a246280573911fae46b4d73d2c8655cba01db2e2be5fe5c62c4869f705d5439" address="unix:///run/containerd/s/abf6aa3d768b89560490f45a2adc7ab6856b66fcd283774fb29c57d3fcf1bc97" protocol=ttrpc version=3 May 13 03:39:49.203461 systemd[1]: Started cri-containerd-2a246280573911fae46b4d73d2c8655cba01db2e2be5fe5c62c4869f705d5439.scope - libcontainer container 2a246280573911fae46b4d73d2c8655cba01db2e2be5fe5c62c4869f705d5439. May 13 03:39:49.246178 containerd[1485]: time="2025-05-13T03:39:49.246118132Z" level=info msg="StartContainer for \"2a246280573911fae46b4d73d2c8655cba01db2e2be5fe5c62c4869f705d5439\" returns successfully" May 13 03:39:49.258922 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 13 03:39:49.259584 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. 
May 13 03:39:49.259930 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... May 13 03:39:49.263536 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 13 03:39:49.267349 containerd[1485]: time="2025-05-13T03:39:49.267034919Z" level=info msg="received exit event container_id:\"2a246280573911fae46b4d73d2c8655cba01db2e2be5fe5c62c4869f705d5439\" id:\"2a246280573911fae46b4d73d2c8655cba01db2e2be5fe5c62c4869f705d5439\" pid:3169 exited_at:{seconds:1747107589 nanos:266858982}" May 13 03:39:49.267502 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 13 03:39:49.268041 systemd[1]: cri-containerd-2a246280573911fae46b4d73d2c8655cba01db2e2be5fe5c62c4869f705d5439.scope: Deactivated successfully. May 13 03:39:49.272461 containerd[1485]: time="2025-05-13T03:39:49.272411428Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2a246280573911fae46b4d73d2c8655cba01db2e2be5fe5c62c4869f705d5439\" id:\"2a246280573911fae46b4d73d2c8655cba01db2e2be5fe5c62c4869f705d5439\" pid:3169 exited_at:{seconds:1747107589 nanos:266858982}" May 13 03:39:49.294116 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 13 03:39:50.115542 containerd[1485]: time="2025-05-13T03:39:50.115506444Z" level=info msg="CreateContainer within sandbox \"d39e13ef00390611776c6811cc2e77a0d804041695a596ca3e1e15c02254af19\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 13 03:39:50.134985 containerd[1485]: time="2025-05-13T03:39:50.131770075Z" level=info msg="Container 73282d56a72f24905fd871beec5518e2ab7d697c572bef1716ac6586e9b6eddd: CDI devices from CRI Config.CDIDevices: []" May 13 03:39:50.137792 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2a246280573911fae46b4d73d2c8655cba01db2e2be5fe5c62c4869f705d5439-rootfs.mount: Deactivated successfully. 
May 13 03:39:50.159289 containerd[1485]: time="2025-05-13T03:39:50.159136862Z" level=info msg="CreateContainer within sandbox \"d39e13ef00390611776c6811cc2e77a0d804041695a596ca3e1e15c02254af19\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"73282d56a72f24905fd871beec5518e2ab7d697c572bef1716ac6586e9b6eddd\"" May 13 03:39:50.161411 containerd[1485]: time="2025-05-13T03:39:50.161385433Z" level=info msg="StartContainer for \"73282d56a72f24905fd871beec5518e2ab7d697c572bef1716ac6586e9b6eddd\"" May 13 03:39:50.162922 containerd[1485]: time="2025-05-13T03:39:50.162898751Z" level=info msg="connecting to shim 73282d56a72f24905fd871beec5518e2ab7d697c572bef1716ac6586e9b6eddd" address="unix:///run/containerd/s/abf6aa3d768b89560490f45a2adc7ab6856b66fcd283774fb29c57d3fcf1bc97" protocol=ttrpc version=3 May 13 03:39:50.191403 systemd[1]: Started cri-containerd-73282d56a72f24905fd871beec5518e2ab7d697c572bef1716ac6586e9b6eddd.scope - libcontainer container 73282d56a72f24905fd871beec5518e2ab7d697c572bef1716ac6586e9b6eddd. May 13 03:39:50.239072 systemd[1]: cri-containerd-73282d56a72f24905fd871beec5518e2ab7d697c572bef1716ac6586e9b6eddd.scope: Deactivated successfully. 
May 13 03:39:50.246032 containerd[1485]: time="2025-05-13T03:39:50.246001204Z" level=info msg="received exit event container_id:\"73282d56a72f24905fd871beec5518e2ab7d697c572bef1716ac6586e9b6eddd\" id:\"73282d56a72f24905fd871beec5518e2ab7d697c572bef1716ac6586e9b6eddd\" pid:3227 exited_at:{seconds:1747107590 nanos:243918617}" May 13 03:39:50.248287 containerd[1485]: time="2025-05-13T03:39:50.246793018Z" level=info msg="TaskExit event in podsandbox handler container_id:\"73282d56a72f24905fd871beec5518e2ab7d697c572bef1716ac6586e9b6eddd\" id:\"73282d56a72f24905fd871beec5518e2ab7d697c572bef1716ac6586e9b6eddd\" pid:3227 exited_at:{seconds:1747107590 nanos:243918617}" May 13 03:39:50.248366 containerd[1485]: time="2025-05-13T03:39:50.246895025Z" level=info msg="StartContainer for \"73282d56a72f24905fd871beec5518e2ab7d697c572bef1716ac6586e9b6eddd\" returns successfully" May 13 03:39:50.288478 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-73282d56a72f24905fd871beec5518e2ab7d697c572bef1716ac6586e9b6eddd-rootfs.mount: Deactivated successfully. 
May 13 03:39:50.717930 containerd[1485]: time="2025-05-13T03:39:50.717754705Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 03:39:50.718929 containerd[1485]: time="2025-05-13T03:39:50.718868593Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" May 13 03:39:50.720250 containerd[1485]: time="2025-05-13T03:39:50.720182848Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 03:39:50.721639 containerd[1485]: time="2025-05-13T03:39:50.721527717Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 3.78145464s" May 13 03:39:50.721639 containerd[1485]: time="2025-05-13T03:39:50.721564565Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" May 13 03:39:50.724310 containerd[1485]: time="2025-05-13T03:39:50.723785458Z" level=info msg="CreateContainer within sandbox \"e582aaf94acdbb55a81479ff6ab6f7629cae08cc02c326520d1f004fb7554338\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" May 13 03:39:50.734726 containerd[1485]: time="2025-05-13T03:39:50.734688881Z" level=info msg="Container 
66ce90e7f0224ec9ab11f04b8ffb3d29e0ba79ac94c963d4af046aca8de765c7: CDI devices from CRI Config.CDIDevices: []" May 13 03:39:50.739354 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1374206703.mount: Deactivated successfully. May 13 03:39:50.755298 containerd[1485]: time="2025-05-13T03:39:50.754211931Z" level=info msg="CreateContainer within sandbox \"e582aaf94acdbb55a81479ff6ab6f7629cae08cc02c326520d1f004fb7554338\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"66ce90e7f0224ec9ab11f04b8ffb3d29e0ba79ac94c963d4af046aca8de765c7\"" May 13 03:39:50.759242 containerd[1485]: time="2025-05-13T03:39:50.758723604Z" level=info msg="StartContainer for \"66ce90e7f0224ec9ab11f04b8ffb3d29e0ba79ac94c963d4af046aca8de765c7\"" May 13 03:39:50.759791 containerd[1485]: time="2025-05-13T03:39:50.759734473Z" level=info msg="connecting to shim 66ce90e7f0224ec9ab11f04b8ffb3d29e0ba79ac94c963d4af046aca8de765c7" address="unix:///run/containerd/s/ae90af1b4381eb67e94d1dd0663422141dfaf132b31cbcffe269bbe95553c697" protocol=ttrpc version=3 May 13 03:39:50.782385 systemd[1]: Started cri-containerd-66ce90e7f0224ec9ab11f04b8ffb3d29e0ba79ac94c963d4af046aca8de765c7.scope - libcontainer container 66ce90e7f0224ec9ab11f04b8ffb3d29e0ba79ac94c963d4af046aca8de765c7. 
May 13 03:39:50.822787 containerd[1485]: time="2025-05-13T03:39:50.822659044Z" level=info msg="StartContainer for \"66ce90e7f0224ec9ab11f04b8ffb3d29e0ba79ac94c963d4af046aca8de765c7\" returns successfully" May 13 03:39:51.122068 containerd[1485]: time="2025-05-13T03:39:51.120901332Z" level=info msg="CreateContainer within sandbox \"d39e13ef00390611776c6811cc2e77a0d804041695a596ca3e1e15c02254af19\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 13 03:39:51.142275 containerd[1485]: time="2025-05-13T03:39:51.141593644Z" level=info msg="Container 0bd994327bd371eb04fb58ebcb4aab99642e74da74b4b7e8a93377381e71e8e0: CDI devices from CRI Config.CDIDevices: []" May 13 03:39:51.152905 containerd[1485]: time="2025-05-13T03:39:51.152870078Z" level=info msg="CreateContainer within sandbox \"d39e13ef00390611776c6811cc2e77a0d804041695a596ca3e1e15c02254af19\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"0bd994327bd371eb04fb58ebcb4aab99642e74da74b4b7e8a93377381e71e8e0\"" May 13 03:39:51.154088 containerd[1485]: time="2025-05-13T03:39:51.153609270Z" level=info msg="StartContainer for \"0bd994327bd371eb04fb58ebcb4aab99642e74da74b4b7e8a93377381e71e8e0\"" May 13 03:39:51.154701 containerd[1485]: time="2025-05-13T03:39:51.154646534Z" level=info msg="connecting to shim 0bd994327bd371eb04fb58ebcb4aab99642e74da74b4b7e8a93377381e71e8e0" address="unix:///run/containerd/s/abf6aa3d768b89560490f45a2adc7ab6856b66fcd283774fb29c57d3fcf1bc97" protocol=ttrpc version=3 May 13 03:39:51.192414 systemd[1]: Started cri-containerd-0bd994327bd371eb04fb58ebcb4aab99642e74da74b4b7e8a93377381e71e8e0.scope - libcontainer container 0bd994327bd371eb04fb58ebcb4aab99642e74da74b4b7e8a93377381e71e8e0. 
May 13 03:39:51.196569 kubelet[2718]: I0513 03:39:51.196514 2718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-69cxp" podStartSLOduration=0.985946119 podStartE2EDuration="12.196494127s" podCreationTimestamp="2025-05-13 03:39:39 +0000 UTC" firstStartedPulling="2025-05-13 03:39:39.511964623 +0000 UTC m=+5.765584239" lastFinishedPulling="2025-05-13 03:39:50.722512641 +0000 UTC m=+16.976132247" observedRunningTime="2025-05-13 03:39:51.194656431 +0000 UTC m=+17.448276038" watchObservedRunningTime="2025-05-13 03:39:51.196494127 +0000 UTC m=+17.450113743" May 13 03:39:51.247589 systemd[1]: cri-containerd-0bd994327bd371eb04fb58ebcb4aab99642e74da74b4b7e8a93377381e71e8e0.scope: Deactivated successfully. May 13 03:39:51.248992 containerd[1485]: time="2025-05-13T03:39:51.247574263Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0bd994327bd371eb04fb58ebcb4aab99642e74da74b4b7e8a93377381e71e8e0\" id:\"0bd994327bd371eb04fb58ebcb4aab99642e74da74b4b7e8a93377381e71e8e0\" pid:3306 exited_at:{seconds:1747107591 nanos:245766450}" May 13 03:39:51.249664 containerd[1485]: time="2025-05-13T03:39:51.249549009Z" level=info msg="received exit event container_id:\"0bd994327bd371eb04fb58ebcb4aab99642e74da74b4b7e8a93377381e71e8e0\" id:\"0bd994327bd371eb04fb58ebcb4aab99642e74da74b4b7e8a93377381e71e8e0\" pid:3306 exited_at:{seconds:1747107591 nanos:245766450}" May 13 03:39:51.262466 containerd[1485]: time="2025-05-13T03:39:51.262303677Z" level=info msg="StartContainer for \"0bd994327bd371eb04fb58ebcb4aab99642e74da74b4b7e8a93377381e71e8e0\" returns successfully" May 13 03:39:51.288983 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0bd994327bd371eb04fb58ebcb4aab99642e74da74b4b7e8a93377381e71e8e0-rootfs.mount: Deactivated successfully. 
May 13 03:39:52.143993 containerd[1485]: time="2025-05-13T03:39:52.143888524Z" level=info msg="CreateContainer within sandbox \"d39e13ef00390611776c6811cc2e77a0d804041695a596ca3e1e15c02254af19\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 13 03:39:52.181298 containerd[1485]: time="2025-05-13T03:39:52.178448296Z" level=info msg="Container efc23b252c4d9acfe79c4b113c20871fddcc11dbdd6b80ffefa0e618c7c43857: CDI devices from CRI Config.CDIDevices: []" May 13 03:39:52.203322 containerd[1485]: time="2025-05-13T03:39:52.203200218Z" level=info msg="CreateContainer within sandbox \"d39e13ef00390611776c6811cc2e77a0d804041695a596ca3e1e15c02254af19\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"efc23b252c4d9acfe79c4b113c20871fddcc11dbdd6b80ffefa0e618c7c43857\"" May 13 03:39:52.205266 containerd[1485]: time="2025-05-13T03:39:52.203947169Z" level=info msg="StartContainer for \"efc23b252c4d9acfe79c4b113c20871fddcc11dbdd6b80ffefa0e618c7c43857\"" May 13 03:39:52.205266 containerd[1485]: time="2025-05-13T03:39:52.204824946Z" level=info msg="connecting to shim efc23b252c4d9acfe79c4b113c20871fddcc11dbdd6b80ffefa0e618c7c43857" address="unix:///run/containerd/s/abf6aa3d768b89560490f45a2adc7ab6856b66fcd283774fb29c57d3fcf1bc97" protocol=ttrpc version=3 May 13 03:39:52.229387 systemd[1]: Started cri-containerd-efc23b252c4d9acfe79c4b113c20871fddcc11dbdd6b80ffefa0e618c7c43857.scope - libcontainer container efc23b252c4d9acfe79c4b113c20871fddcc11dbdd6b80ffefa0e618c7c43857. 
May 13 03:39:52.281145 containerd[1485]: time="2025-05-13T03:39:52.281084218Z" level=info msg="StartContainer for \"efc23b252c4d9acfe79c4b113c20871fddcc11dbdd6b80ffefa0e618c7c43857\" returns successfully" May 13 03:39:52.369495 containerd[1485]: time="2025-05-13T03:39:52.369435935Z" level=info msg="TaskExit event in podsandbox handler container_id:\"efc23b252c4d9acfe79c4b113c20871fddcc11dbdd6b80ffefa0e618c7c43857\" id:\"af3493f8c293ac58c6f45084a1b675140026d1131dcb4275c5addac0af4a6e7a\" pid:3373 exited_at:{seconds:1747107592 nanos:369161688}" May 13 03:39:52.470132 kubelet[2718]: I0513 03:39:52.470003 2718 kubelet_node_status.go:502] "Fast updating node status as it just became ready" May 13 03:39:52.588989 systemd[1]: Created slice kubepods-burstable-pod2c0b8163_fd13_47da_b85f_9cca371fb6ab.slice - libcontainer container kubepods-burstable-pod2c0b8163_fd13_47da_b85f_9cca371fb6ab.slice. May 13 03:39:52.602058 systemd[1]: Created slice kubepods-burstable-pod7c2362e3_2e7c_4b5a_bc52_8a2deafc3462.slice - libcontainer container kubepods-burstable-pod7c2362e3_2e7c_4b5a_bc52_8a2deafc3462.slice. 
May 13 03:39:52.644155 kubelet[2718]: I0513 03:39:52.644107 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2c0b8163-fd13-47da-b85f-9cca371fb6ab-config-volume\") pod \"coredns-668d6bf9bc-jdwd6\" (UID: \"2c0b8163-fd13-47da-b85f-9cca371fb6ab\") " pod="kube-system/coredns-668d6bf9bc-jdwd6" May 13 03:39:52.644660 kubelet[2718]: I0513 03:39:52.644642 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fms87\" (UniqueName: \"kubernetes.io/projected/2c0b8163-fd13-47da-b85f-9cca371fb6ab-kube-api-access-fms87\") pod \"coredns-668d6bf9bc-jdwd6\" (UID: \"2c0b8163-fd13-47da-b85f-9cca371fb6ab\") " pod="kube-system/coredns-668d6bf9bc-jdwd6" May 13 03:39:52.644775 kubelet[2718]: I0513 03:39:52.644761 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2x46q\" (UniqueName: \"kubernetes.io/projected/7c2362e3-2e7c-4b5a-bc52-8a2deafc3462-kube-api-access-2x46q\") pod \"coredns-668d6bf9bc-kcdzs\" (UID: \"7c2362e3-2e7c-4b5a-bc52-8a2deafc3462\") " pod="kube-system/coredns-668d6bf9bc-kcdzs" May 13 03:39:52.644940 kubelet[2718]: I0513 03:39:52.644909 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7c2362e3-2e7c-4b5a-bc52-8a2deafc3462-config-volume\") pod \"coredns-668d6bf9bc-kcdzs\" (UID: \"7c2362e3-2e7c-4b5a-bc52-8a2deafc3462\") " pod="kube-system/coredns-668d6bf9bc-kcdzs" May 13 03:39:52.894531 containerd[1485]: time="2025-05-13T03:39:52.894495004Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-jdwd6,Uid:2c0b8163-fd13-47da-b85f-9cca371fb6ab,Namespace:kube-system,Attempt:0,}" May 13 03:39:52.907276 containerd[1485]: time="2025-05-13T03:39:52.907104757Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-668d6bf9bc-kcdzs,Uid:7c2362e3-2e7c-4b5a-bc52-8a2deafc3462,Namespace:kube-system,Attempt:0,}" May 13 03:39:53.183406 kubelet[2718]: I0513 03:39:53.180873 2718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-g6zmv" podStartSLOduration=7.698454029 podStartE2EDuration="15.18085612s" podCreationTimestamp="2025-05-13 03:39:38 +0000 UTC" firstStartedPulling="2025-05-13 03:39:39.457153849 +0000 UTC m=+5.710773455" lastFinishedPulling="2025-05-13 03:39:46.93955594 +0000 UTC m=+13.193175546" observedRunningTime="2025-05-13 03:39:53.179886319 +0000 UTC m=+19.433505985" watchObservedRunningTime="2025-05-13 03:39:53.18085612 +0000 UTC m=+19.434475736" May 13 03:39:54.547623 systemd-networkd[1381]: cilium_host: Link UP May 13 03:39:54.554094 systemd-networkd[1381]: cilium_net: Link UP May 13 03:39:54.555057 systemd-networkd[1381]: cilium_net: Gained carrier May 13 03:39:54.555895 systemd-networkd[1381]: cilium_host: Gained carrier May 13 03:39:54.556097 systemd-networkd[1381]: cilium_net: Gained IPv6LL May 13 03:39:54.556714 systemd-networkd[1381]: cilium_host: Gained IPv6LL May 13 03:39:54.663418 systemd-networkd[1381]: cilium_vxlan: Link UP May 13 03:39:54.663427 systemd-networkd[1381]: cilium_vxlan: Gained carrier May 13 03:39:54.954353 kernel: NET: Registered PF_ALG protocol family May 13 03:39:55.706796 systemd-networkd[1381]: lxc_health: Link UP May 13 03:39:55.732802 systemd-networkd[1381]: lxc_health: Gained carrier May 13 03:39:55.944263 systemd-networkd[1381]: lxc69bfef0f6e33: Link UP May 13 03:39:55.956541 kernel: eth0: renamed from tmp31ecd May 13 03:39:55.975210 systemd-networkd[1381]: lxc2fb4f301eb0b: Link UP May 13 03:39:55.983354 kernel: eth0: renamed from tmp26fa6 May 13 03:39:55.981044 systemd-networkd[1381]: lxc69bfef0f6e33: Gained carrier May 13 03:39:56.000828 systemd-networkd[1381]: lxc2fb4f301eb0b: Gained carrier May 13 03:39:56.605412 systemd-networkd[1381]: cilium_vxlan: Gained 
IPv6LL May 13 03:39:57.117443 systemd-networkd[1381]: lxc69bfef0f6e33: Gained IPv6LL May 13 03:39:57.437374 systemd-networkd[1381]: lxc_health: Gained IPv6LL May 13 03:39:57.568493 systemd-networkd[1381]: lxc2fb4f301eb0b: Gained IPv6LL May 13 03:40:00.489704 containerd[1485]: time="2025-05-13T03:40:00.488703825Z" level=info msg="connecting to shim 31ecdbf2feb1963c3bfdf33f9a7393f704c1ee9b7a113d54250bd23d405b4703" address="unix:///run/containerd/s/ba71e02f22712ef5fd4453109f48254c2592c82f587b69f1058ab71637573637" namespace=k8s.io protocol=ttrpc version=3 May 13 03:40:00.545603 systemd[1]: Started cri-containerd-31ecdbf2feb1963c3bfdf33f9a7393f704c1ee9b7a113d54250bd23d405b4703.scope - libcontainer container 31ecdbf2feb1963c3bfdf33f9a7393f704c1ee9b7a113d54250bd23d405b4703. May 13 03:40:00.575787 containerd[1485]: time="2025-05-13T03:40:00.575395294Z" level=info msg="connecting to shim 26fa62e6569492b1671fadab4b874167b627feac724aa59502c3cdab26d95a36" address="unix:///run/containerd/s/f720481c7d7190ad0ca737f8479cec106f7e976dc787f93458403191a071ddb1" namespace=k8s.io protocol=ttrpc version=3 May 13 03:40:00.621651 systemd[1]: Started cri-containerd-26fa62e6569492b1671fadab4b874167b627feac724aa59502c3cdab26d95a36.scope - libcontainer container 26fa62e6569492b1671fadab4b874167b627feac724aa59502c3cdab26d95a36. 
May 13 03:40:00.634087 containerd[1485]: time="2025-05-13T03:40:00.634037671Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-jdwd6,Uid:2c0b8163-fd13-47da-b85f-9cca371fb6ab,Namespace:kube-system,Attempt:0,} returns sandbox id \"31ecdbf2feb1963c3bfdf33f9a7393f704c1ee9b7a113d54250bd23d405b4703\"" May 13 03:40:00.639504 containerd[1485]: time="2025-05-13T03:40:00.639458812Z" level=info msg="CreateContainer within sandbox \"31ecdbf2feb1963c3bfdf33f9a7393f704c1ee9b7a113d54250bd23d405b4703\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 13 03:40:00.657380 containerd[1485]: time="2025-05-13T03:40:00.657338003Z" level=info msg="Container 1eafc664ca776727d015364cf79152e12a7e05fd2d76e7450685bde32f91e19d: CDI devices from CRI Config.CDIDevices: []" May 13 03:40:00.669745 containerd[1485]: time="2025-05-13T03:40:00.669692273Z" level=info msg="CreateContainer within sandbox \"31ecdbf2feb1963c3bfdf33f9a7393f704c1ee9b7a113d54250bd23d405b4703\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1eafc664ca776727d015364cf79152e12a7e05fd2d76e7450685bde32f91e19d\"" May 13 03:40:00.671354 containerd[1485]: time="2025-05-13T03:40:00.671294326Z" level=info msg="StartContainer for \"1eafc664ca776727d015364cf79152e12a7e05fd2d76e7450685bde32f91e19d\"" May 13 03:40:00.672119 containerd[1485]: time="2025-05-13T03:40:00.672089695Z" level=info msg="connecting to shim 1eafc664ca776727d015364cf79152e12a7e05fd2d76e7450685bde32f91e19d" address="unix:///run/containerd/s/ba71e02f22712ef5fd4453109f48254c2592c82f587b69f1058ab71637573637" protocol=ttrpc version=3 May 13 03:40:00.699408 systemd[1]: Started cri-containerd-1eafc664ca776727d015364cf79152e12a7e05fd2d76e7450685bde32f91e19d.scope - libcontainer container 1eafc664ca776727d015364cf79152e12a7e05fd2d76e7450685bde32f91e19d. 
May 13 03:40:00.721942 containerd[1485]: time="2025-05-13T03:40:00.721898122Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-kcdzs,Uid:7c2362e3-2e7c-4b5a-bc52-8a2deafc3462,Namespace:kube-system,Attempt:0,} returns sandbox id \"26fa62e6569492b1671fadab4b874167b627feac724aa59502c3cdab26d95a36\"" May 13 03:40:00.726634 containerd[1485]: time="2025-05-13T03:40:00.726601932Z" level=info msg="CreateContainer within sandbox \"26fa62e6569492b1671fadab4b874167b627feac724aa59502c3cdab26d95a36\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 13 03:40:00.745969 containerd[1485]: time="2025-05-13T03:40:00.745396177Z" level=info msg="Container 636a58c68363fcd35606ab5ddd244f82ef105da0162764693ccc603dd46a2edc: CDI devices from CRI Config.CDIDevices: []" May 13 03:40:00.758714 containerd[1485]: time="2025-05-13T03:40:00.758596399Z" level=info msg="CreateContainer within sandbox \"26fa62e6569492b1671fadab4b874167b627feac724aa59502c3cdab26d95a36\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"636a58c68363fcd35606ab5ddd244f82ef105da0162764693ccc603dd46a2edc\"" May 13 03:40:00.759603 containerd[1485]: time="2025-05-13T03:40:00.759486041Z" level=info msg="StartContainer for \"636a58c68363fcd35606ab5ddd244f82ef105da0162764693ccc603dd46a2edc\"" May 13 03:40:00.761032 containerd[1485]: time="2025-05-13T03:40:00.760998240Z" level=info msg="connecting to shim 636a58c68363fcd35606ab5ddd244f82ef105da0162764693ccc603dd46a2edc" address="unix:///run/containerd/s/f720481c7d7190ad0ca737f8479cec106f7e976dc787f93458403191a071ddb1" protocol=ttrpc version=3 May 13 03:40:00.765974 containerd[1485]: time="2025-05-13T03:40:00.765837556Z" level=info msg="StartContainer for \"1eafc664ca776727d015364cf79152e12a7e05fd2d76e7450685bde32f91e19d\" returns successfully" May 13 03:40:00.789395 systemd[1]: Started cri-containerd-636a58c68363fcd35606ab5ddd244f82ef105da0162764693ccc603dd46a2edc.scope - libcontainer container 
636a58c68363fcd35606ab5ddd244f82ef105da0162764693ccc603dd46a2edc. May 13 03:40:00.840265 containerd[1485]: time="2025-05-13T03:40:00.840198183Z" level=info msg="StartContainer for \"636a58c68363fcd35606ab5ddd244f82ef105da0162764693ccc603dd46a2edc\" returns successfully" May 13 03:40:01.243050 kubelet[2718]: I0513 03:40:01.242911 2718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-kcdzs" podStartSLOduration=22.24288327 podStartE2EDuration="22.24288327s" podCreationTimestamp="2025-05-13 03:39:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 03:40:01.209635496 +0000 UTC m=+27.463255192" watchObservedRunningTime="2025-05-13 03:40:01.24288327 +0000 UTC m=+27.496502916" May 13 03:40:01.281991 kubelet[2718]: I0513 03:40:01.281932 2718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-jdwd6" podStartSLOduration=22.281913334 podStartE2EDuration="22.281913334s" podCreationTimestamp="2025-05-13 03:39:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 03:40:01.280058489 +0000 UTC m=+27.533678095" watchObservedRunningTime="2025-05-13 03:40:01.281913334 +0000 UTC m=+27.535532970" May 13 03:42:48.597765 update_engine[1469]: I20250513 03:42:48.596920 1469 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs May 13 03:42:48.597765 update_engine[1469]: I20250513 03:42:48.597386 1469 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs May 13 03:42:48.604303 update_engine[1469]: I20250513 03:42:48.599572 1469 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs May 13 03:42:48.604303 update_engine[1469]: I20250513 03:42:48.602927 1469 omaha_request_params.cc:62] Current group set to 
alpha May 13 03:42:48.605710 update_engine[1469]: I20250513 03:42:48.604564 1469 update_attempter.cc:499] Already updated boot flags. Skipping. May 13 03:42:48.605710 update_engine[1469]: I20250513 03:42:48.604612 1469 update_attempter.cc:643] Scheduling an action processor start. May 13 03:42:48.605710 update_engine[1469]: I20250513 03:42:48.604694 1469 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction May 13 03:42:48.605710 update_engine[1469]: I20250513 03:42:48.604915 1469 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs May 13 03:42:48.605710 update_engine[1469]: I20250513 03:42:48.605152 1469 omaha_request_action.cc:271] Posting an Omaha request to disabled May 13 03:42:48.605710 update_engine[1469]: I20250513 03:42:48.605182 1469 omaha_request_action.cc:272] Request: May 13 03:42:48.605710 update_engine[1469]: May 13 03:42:48.605710 update_engine[1469]: May 13 03:42:48.605710 update_engine[1469]: May 13 03:42:48.605710 update_engine[1469]: May 13 03:42:48.605710 update_engine[1469]: May 13 03:42:48.605710 update_engine[1469]: May 13 03:42:48.605710 update_engine[1469]: May 13 03:42:48.605710 update_engine[1469]: May 13 03:42:48.605710 update_engine[1469]: I20250513 03:42:48.605217 1469 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 13 03:42:48.611840 locksmithd[1499]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 May 13 03:42:48.613765 update_engine[1469]: I20250513 03:42:48.613687 1469 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 13 03:42:48.615112 update_engine[1469]: I20250513 03:42:48.615007 1469 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
May 13 03:42:48.620539 update_engine[1469]: E20250513 03:42:48.620433 1469 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 13 03:42:48.620766 update_engine[1469]: I20250513 03:42:48.620672 1469 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 May 13 03:42:58.506574 update_engine[1469]: I20250513 03:42:58.506455 1469 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 13 03:42:58.508328 update_engine[1469]: I20250513 03:42:58.507038 1469 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 13 03:42:58.509462 update_engine[1469]: I20250513 03:42:58.509397 1469 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. May 13 03:42:58.513871 update_engine[1469]: E20250513 03:42:58.513655 1469 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 13 03:42:58.514088 update_engine[1469]: I20250513 03:42:58.514005 1469 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 May 13 03:43:08.508134 update_engine[1469]: I20250513 03:43:08.507385 1469 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 13 03:43:08.512225 update_engine[1469]: I20250513 03:43:08.510109 1469 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 13 03:43:08.512225 update_engine[1469]: I20250513 03:43:08.511267 1469 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
May 13 03:43:08.518668 update_engine[1469]: E20250513 03:43:08.518589 1469 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 13 03:43:08.518922 update_engine[1469]: I20250513 03:43:08.518832 1469 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 May 13 03:43:18.506313 update_engine[1469]: I20250513 03:43:18.506125 1469 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 13 03:43:18.507301 update_engine[1469]: I20250513 03:43:18.506824 1469 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 13 03:43:18.507488 update_engine[1469]: I20250513 03:43:18.507417 1469 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. May 13 03:43:18.512787 update_engine[1469]: E20250513 03:43:18.512683 1469 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 13 03:43:18.513055 update_engine[1469]: I20250513 03:43:18.512863 1469 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded May 13 03:43:18.513055 update_engine[1469]: I20250513 03:43:18.512929 1469 omaha_request_action.cc:617] Omaha request response: May 13 03:43:18.513705 update_engine[1469]: E20250513 03:43:18.513610 1469 omaha_request_action.cc:636] Omaha request network transfer failed. May 13 03:43:18.514385 update_engine[1469]: I20250513 03:43:18.514223 1469 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. May 13 03:43:18.514385 update_engine[1469]: I20250513 03:43:18.514343 1469 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction May 13 03:43:18.514385 update_engine[1469]: I20250513 03:43:18.514371 1469 update_attempter.cc:306] Processing Done. May 13 03:43:18.516158 update_engine[1469]: E20250513 03:43:18.514503 1469 update_attempter.cc:619] Update failed. 
May 13 03:43:18.516158 update_engine[1469]: I20250513 03:43:18.514537 1469 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse May 13 03:43:18.516158 update_engine[1469]: I20250513 03:43:18.514551 1469 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) May 13 03:43:18.516158 update_engine[1469]: I20250513 03:43:18.514565 1469 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. May 13 03:43:18.516593 update_engine[1469]: I20250513 03:43:18.516330 1469 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction May 13 03:43:18.516593 update_engine[1469]: I20250513 03:43:18.516498 1469 omaha_request_action.cc:271] Posting an Omaha request to disabled May 13 03:43:18.516593 update_engine[1469]: I20250513 03:43:18.516526 1469 omaha_request_action.cc:272] Request: May 13 03:43:18.516593 update_engine[1469]: May 13 03:43:18.516593 update_engine[1469]: May 13 03:43:18.516593 update_engine[1469]: May 13 03:43:18.516593 update_engine[1469]: May 13 03:43:18.516593 update_engine[1469]: May 13 03:43:18.516593 update_engine[1469]: May 13 03:43:18.516593 update_engine[1469]: I20250513 03:43:18.516547 1469 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 13 03:43:18.517520 update_engine[1469]: I20250513 03:43:18.516903 1469 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 13 03:43:18.520063 update_engine[1469]: I20250513 03:43:18.519920 1469 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
May 13 03:43:18.523358 locksmithd[1499]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 May 13 03:43:18.526227 update_engine[1469]: E20250513 03:43:18.525465 1469 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 13 03:43:18.526227 update_engine[1469]: I20250513 03:43:18.525647 1469 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded May 13 03:43:18.526227 update_engine[1469]: I20250513 03:43:18.525679 1469 omaha_request_action.cc:617] Omaha request response: May 13 03:43:18.526227 update_engine[1469]: I20250513 03:43:18.525694 1469 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction May 13 03:43:18.526227 update_engine[1469]: I20250513 03:43:18.525707 1469 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction May 13 03:43:18.526227 update_engine[1469]: I20250513 03:43:18.525719 1469 update_attempter.cc:306] Processing Done. May 13 03:43:18.526227 update_engine[1469]: I20250513 03:43:18.525733 1469 update_attempter.cc:310] Error event sent. May 13 03:43:18.526227 update_engine[1469]: I20250513 03:43:18.525774 1469 update_check_scheduler.cc:74] Next update check in 40m12s May 13 03:43:18.529411 locksmithd[1499]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 May 13 03:43:29.847285 systemd[1]: Started sshd@9-172.24.4.174:22-172.24.4.1:40222.service - OpenSSH per-connection server daemon (172.24.4.1:40222). May 13 03:43:31.303882 sshd[4036]: Accepted publickey for core from 172.24.4.1 port 40222 ssh2: RSA SHA256:opboDc8cTXVJHtjjKc0iUyBIh5veaBDSbNiIl+xLW2w May 13 03:43:31.314742 sshd-session[4036]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 03:43:31.345792 systemd-logind[1468]: New session 12 of user core. 
May 13 03:43:31.362707 systemd[1]: Started session-12.scope - Session 12 of User core. May 13 03:43:32.136305 sshd[4041]: Connection closed by 172.24.4.1 port 40222 May 13 03:43:32.137023 sshd-session[4036]: pam_unix(sshd:session): session closed for user core May 13 03:43:32.151696 systemd-logind[1468]: Session 12 logged out. Waiting for processes to exit. May 13 03:43:32.152928 systemd[1]: sshd@9-172.24.4.174:22-172.24.4.1:40222.service: Deactivated successfully. May 13 03:43:32.166745 systemd[1]: session-12.scope: Deactivated successfully. May 13 03:43:32.173523 systemd-logind[1468]: Removed session 12. May 13 03:43:37.161815 systemd[1]: Started sshd@10-172.24.4.174:22-172.24.4.1:47614.service - OpenSSH per-connection server daemon (172.24.4.1:47614). May 13 03:43:38.364321 sshd[4056]: Accepted publickey for core from 172.24.4.1 port 47614 ssh2: RSA SHA256:opboDc8cTXVJHtjjKc0iUyBIh5veaBDSbNiIl+xLW2w May 13 03:43:38.366591 sshd-session[4056]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 03:43:38.380147 systemd-logind[1468]: New session 13 of user core. May 13 03:43:38.394577 systemd[1]: Started session-13.scope - Session 13 of User core. May 13 03:43:39.197301 sshd[4058]: Connection closed by 172.24.4.1 port 47614 May 13 03:43:39.198810 sshd-session[4056]: pam_unix(sshd:session): session closed for user core May 13 03:43:39.209415 systemd[1]: sshd@10-172.24.4.174:22-172.24.4.1:47614.service: Deactivated successfully. May 13 03:43:39.217649 systemd[1]: session-13.scope: Deactivated successfully. May 13 03:43:39.221627 systemd-logind[1468]: Session 13 logged out. Waiting for processes to exit. May 13 03:43:39.227018 systemd-logind[1468]: Removed session 13. May 13 03:43:44.232678 systemd[1]: Started sshd@11-172.24.4.174:22-172.24.4.1:43702.service - OpenSSH per-connection server daemon (172.24.4.1:43702). 
May 13 03:43:45.383803 sshd[4073]: Accepted publickey for core from 172.24.4.1 port 43702 ssh2: RSA SHA256:opboDc8cTXVJHtjjKc0iUyBIh5veaBDSbNiIl+xLW2w
May 13 03:43:45.387665 sshd-session[4073]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 03:43:45.405324 systemd-logind[1468]: New session 14 of user core.
May 13 03:43:45.407751 systemd[1]: Started session-14.scope - Session 14 of User core.
May 13 03:43:46.271396 sshd[4075]: Connection closed by 172.24.4.1 port 43702
May 13 03:43:46.272922 sshd-session[4073]: pam_unix(sshd:session): session closed for user core
May 13 03:43:46.281081 systemd-logind[1468]: Session 14 logged out. Waiting for processes to exit.
May 13 03:43:46.282159 systemd[1]: sshd@11-172.24.4.174:22-172.24.4.1:43702.service: Deactivated successfully.
May 13 03:43:46.292014 systemd[1]: session-14.scope: Deactivated successfully.
May 13 03:43:46.298109 systemd-logind[1468]: Removed session 14.
May 13 03:43:51.305355 systemd[1]: Started sshd@12-172.24.4.174:22-172.24.4.1:43706.service - OpenSSH per-connection server daemon (172.24.4.1:43706).
May 13 03:43:52.512468 sshd[4088]: Accepted publickey for core from 172.24.4.1 port 43706 ssh2: RSA SHA256:opboDc8cTXVJHtjjKc0iUyBIh5veaBDSbNiIl+xLW2w
May 13 03:43:52.519563 sshd-session[4088]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 03:43:52.545842 systemd-logind[1468]: New session 15 of user core.
May 13 03:43:52.558837 systemd[1]: Started session-15.scope - Session 15 of User core.
May 13 03:43:53.359682 sshd[4090]: Connection closed by 172.24.4.1 port 43706
May 13 03:43:53.362649 sshd-session[4088]: pam_unix(sshd:session): session closed for user core
May 13 03:43:53.378969 systemd[1]: sshd@12-172.24.4.174:22-172.24.4.1:43706.service: Deactivated successfully.
May 13 03:43:53.384271 systemd[1]: session-15.scope: Deactivated successfully.
May 13 03:43:53.388754 systemd-logind[1468]: Session 15 logged out. Waiting for processes to exit.
May 13 03:43:53.396090 systemd[1]: Started sshd@13-172.24.4.174:22-172.24.4.1:43712.service - OpenSSH per-connection server daemon (172.24.4.1:43712).
May 13 03:43:53.401730 systemd-logind[1468]: Removed session 15.
May 13 03:43:54.585175 sshd[4102]: Accepted publickey for core from 172.24.4.1 port 43712 ssh2: RSA SHA256:opboDc8cTXVJHtjjKc0iUyBIh5veaBDSbNiIl+xLW2w
May 13 03:43:54.588174 sshd-session[4102]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 03:43:54.600913 systemd-logind[1468]: New session 16 of user core.
May 13 03:43:54.610601 systemd[1]: Started session-16.scope - Session 16 of User core.
May 13 03:43:55.434613 sshd[4105]: Connection closed by 172.24.4.1 port 43712
May 13 03:43:55.436202 sshd-session[4102]: pam_unix(sshd:session): session closed for user core
May 13 03:43:55.455219 systemd[1]: sshd@13-172.24.4.174:22-172.24.4.1:43712.service: Deactivated successfully.
May 13 03:43:55.462887 systemd[1]: session-16.scope: Deactivated successfully.
May 13 03:43:55.468006 systemd-logind[1468]: Session 16 logged out. Waiting for processes to exit.
May 13 03:43:55.473859 systemd[1]: Started sshd@14-172.24.4.174:22-172.24.4.1:45848.service - OpenSSH per-connection server daemon (172.24.4.1:45848).
May 13 03:43:55.482563 systemd-logind[1468]: Removed session 16.
May 13 03:43:56.661494 sshd[4114]: Accepted publickey for core from 172.24.4.1 port 45848 ssh2: RSA SHA256:opboDc8cTXVJHtjjKc0iUyBIh5veaBDSbNiIl+xLW2w
May 13 03:43:56.665483 sshd-session[4114]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 03:43:56.678418 systemd-logind[1468]: New session 17 of user core.
May 13 03:43:56.688597 systemd[1]: Started session-17.scope - Session 17 of User core.
May 13 03:43:57.500282 sshd[4117]: Connection closed by 172.24.4.1 port 45848
May 13 03:43:57.501657 sshd-session[4114]: pam_unix(sshd:session): session closed for user core
May 13 03:43:57.511884 systemd[1]: sshd@14-172.24.4.174:22-172.24.4.1:45848.service: Deactivated successfully.
May 13 03:43:57.521528 systemd[1]: session-17.scope: Deactivated successfully.
May 13 03:43:57.524122 systemd-logind[1468]: Session 17 logged out. Waiting for processes to exit.
May 13 03:43:57.527669 systemd-logind[1468]: Removed session 17.
May 13 03:44:02.537836 systemd[1]: Started sshd@15-172.24.4.174:22-172.24.4.1:45850.service - OpenSSH per-connection server daemon (172.24.4.1:45850).
May 13 03:44:03.615536 sshd[4130]: Accepted publickey for core from 172.24.4.1 port 45850 ssh2: RSA SHA256:opboDc8cTXVJHtjjKc0iUyBIh5veaBDSbNiIl+xLW2w
May 13 03:44:03.618843 sshd-session[4130]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 03:44:03.632777 systemd-logind[1468]: New session 18 of user core.
May 13 03:44:03.639583 systemd[1]: Started session-18.scope - Session 18 of User core.
May 13 03:44:04.466719 sshd[4132]: Connection closed by 172.24.4.1 port 45850
May 13 03:44:04.470569 sshd-session[4130]: pam_unix(sshd:session): session closed for user core
May 13 03:44:04.491506 systemd[1]: sshd@15-172.24.4.174:22-172.24.4.1:45850.service: Deactivated successfully.
May 13 03:44:04.499134 systemd[1]: session-18.scope: Deactivated successfully.
May 13 03:44:04.506515 systemd-logind[1468]: Session 18 logged out. Waiting for processes to exit.
May 13 03:44:04.515534 systemd[1]: Started sshd@16-172.24.4.174:22-172.24.4.1:55634.service - OpenSSH per-connection server daemon (172.24.4.1:55634).
May 13 03:44:04.523769 systemd-logind[1468]: Removed session 18.
May 13 03:44:05.597971 sshd[4143]: Accepted publickey for core from 172.24.4.1 port 55634 ssh2: RSA SHA256:opboDc8cTXVJHtjjKc0iUyBIh5veaBDSbNiIl+xLW2w
May 13 03:44:05.602701 sshd-session[4143]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 03:44:05.621847 systemd-logind[1468]: New session 19 of user core.
May 13 03:44:05.628668 systemd[1]: Started session-19.scope - Session 19 of User core.
May 13 03:44:06.448128 sshd[4146]: Connection closed by 172.24.4.1 port 55634
May 13 03:44:06.449604 sshd-session[4143]: pam_unix(sshd:session): session closed for user core
May 13 03:44:06.468367 systemd[1]: sshd@16-172.24.4.174:22-172.24.4.1:55634.service: Deactivated successfully.
May 13 03:44:06.472212 systemd[1]: session-19.scope: Deactivated successfully.
May 13 03:44:06.475077 systemd-logind[1468]: Session 19 logged out. Waiting for processes to exit.
May 13 03:44:06.481517 systemd[1]: Started sshd@17-172.24.4.174:22-172.24.4.1:55638.service - OpenSSH per-connection server daemon (172.24.4.1:55638).
May 13 03:44:06.484808 systemd-logind[1468]: Removed session 19.
May 13 03:44:07.676782 sshd[4155]: Accepted publickey for core from 172.24.4.1 port 55638 ssh2: RSA SHA256:opboDc8cTXVJHtjjKc0iUyBIh5veaBDSbNiIl+xLW2w
May 13 03:44:07.680821 sshd-session[4155]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 03:44:07.694114 systemd-logind[1468]: New session 20 of user core.
May 13 03:44:07.703574 systemd[1]: Started session-20.scope - Session 20 of User core.
May 13 03:44:09.926445 sshd[4158]: Connection closed by 172.24.4.1 port 55638
May 13 03:44:09.929945 sshd-session[4155]: pam_unix(sshd:session): session closed for user core
May 13 03:44:09.953437 systemd[1]: sshd@17-172.24.4.174:22-172.24.4.1:55638.service: Deactivated successfully.
May 13 03:44:09.960633 systemd[1]: session-20.scope: Deactivated successfully.
May 13 03:44:09.964119 systemd-logind[1468]: Session 20 logged out. Waiting for processes to exit.
May 13 03:44:09.972856 systemd[1]: Started sshd@18-172.24.4.174:22-172.24.4.1:55646.service - OpenSSH per-connection server daemon (172.24.4.1:55646).
May 13 03:44:09.978346 systemd-logind[1468]: Removed session 20.
May 13 03:44:11.376394 sshd[4176]: Accepted publickey for core from 172.24.4.1 port 55646 ssh2: RSA SHA256:opboDc8cTXVJHtjjKc0iUyBIh5veaBDSbNiIl+xLW2w
May 13 03:44:11.380066 sshd-session[4176]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 03:44:11.394433 systemd-logind[1468]: New session 21 of user core.
May 13 03:44:11.405625 systemd[1]: Started session-21.scope - Session 21 of User core.
May 13 03:44:12.314323 sshd[4179]: Connection closed by 172.24.4.1 port 55646
May 13 03:44:12.314029 sshd-session[4176]: pam_unix(sshd:session): session closed for user core
May 13 03:44:12.345450 systemd[1]: sshd@18-172.24.4.174:22-172.24.4.1:55646.service: Deactivated successfully.
May 13 03:44:12.351877 systemd[1]: session-21.scope: Deactivated successfully.
May 13 03:44:12.358557 systemd-logind[1468]: Session 21 logged out. Waiting for processes to exit.
May 13 03:44:12.365934 systemd[1]: Started sshd@19-172.24.4.174:22-172.24.4.1:55648.service - OpenSSH per-connection server daemon (172.24.4.1:55648).
May 13 03:44:12.370122 systemd-logind[1468]: Removed session 21.
May 13 03:44:13.544318 sshd[4188]: Accepted publickey for core from 172.24.4.1 port 55648 ssh2: RSA SHA256:opboDc8cTXVJHtjjKc0iUyBIh5veaBDSbNiIl+xLW2w
May 13 03:44:13.547711 sshd-session[4188]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 03:44:13.563166 systemd-logind[1468]: New session 22 of user core.
May 13 03:44:13.567596 systemd[1]: Started session-22.scope - Session 22 of User core.
May 13 03:44:14.293818 sshd[4191]: Connection closed by 172.24.4.1 port 55648
May 13 03:44:14.295549 sshd-session[4188]: pam_unix(sshd:session): session closed for user core
May 13 03:44:14.306213 systemd[1]: sshd@19-172.24.4.174:22-172.24.4.1:55648.service: Deactivated successfully.
May 13 03:44:14.311147 systemd[1]: session-22.scope: Deactivated successfully.
May 13 03:44:14.313884 systemd-logind[1468]: Session 22 logged out. Waiting for processes to exit.
May 13 03:44:14.317131 systemd-logind[1468]: Removed session 22.
May 13 03:44:19.322851 systemd[1]: Started sshd@20-172.24.4.174:22-172.24.4.1:49336.service - OpenSSH per-connection server daemon (172.24.4.1:49336).
May 13 03:44:20.481568 sshd[4205]: Accepted publickey for core from 172.24.4.1 port 49336 ssh2: RSA SHA256:opboDc8cTXVJHtjjKc0iUyBIh5veaBDSbNiIl+xLW2w
May 13 03:44:20.484787 sshd-session[4205]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 03:44:20.497377 systemd-logind[1468]: New session 23 of user core.
May 13 03:44:20.506532 systemd[1]: Started session-23.scope - Session 23 of User core.
May 13 03:44:21.243510 sshd[4207]: Connection closed by 172.24.4.1 port 49336
May 13 03:44:21.244286 sshd-session[4205]: pam_unix(sshd:session): session closed for user core
May 13 03:44:21.253198 systemd[1]: sshd@20-172.24.4.174:22-172.24.4.1:49336.service: Deactivated successfully.
May 13 03:44:21.258325 systemd[1]: session-23.scope: Deactivated successfully.
May 13 03:44:21.260431 systemd-logind[1468]: Session 23 logged out. Waiting for processes to exit.
May 13 03:44:21.263262 systemd-logind[1468]: Removed session 23.
May 13 03:44:26.264061 systemd[1]: Started sshd@21-172.24.4.174:22-172.24.4.1:44992.service - OpenSSH per-connection server daemon (172.24.4.1:44992).
May 13 03:44:27.529075 sshd[4218]: Accepted publickey for core from 172.24.4.1 port 44992 ssh2: RSA SHA256:opboDc8cTXVJHtjjKc0iUyBIh5veaBDSbNiIl+xLW2w
May 13 03:44:27.532406 sshd-session[4218]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 03:44:27.548768 systemd-logind[1468]: New session 24 of user core.
May 13 03:44:27.555663 systemd[1]: Started session-24.scope - Session 24 of User core.
May 13 03:44:28.234718 containerd[1485]: time="2025-05-13T03:44:28.233945295Z" level=warning msg="container event discarded" container=e67fc13ee684f0ea6f8da3cbbd4984a657977f62aa4d30c9244ea0cb6790816e type=CONTAINER_CREATED_EVENT
May 13 03:44:28.234718 containerd[1485]: time="2025-05-13T03:44:28.234680669Z" level=warning msg="container event discarded" container=e67fc13ee684f0ea6f8da3cbbd4984a657977f62aa4d30c9244ea0cb6790816e type=CONTAINER_STARTED_EVENT
May 13 03:44:28.268390 containerd[1485]: time="2025-05-13T03:44:28.268166416Z" level=warning msg="container event discarded" container=2c14a18e111e48a5ad454d6645630a8f7b475fda8b413df677e6a0c0e0266b3c type=CONTAINER_CREATED_EVENT
May 13 03:44:28.268390 containerd[1485]: time="2025-05-13T03:44:28.268318765Z" level=warning msg="container event discarded" container=2c14a18e111e48a5ad454d6645630a8f7b475fda8b413df677e6a0c0e0266b3c type=CONTAINER_STARTED_EVENT
May 13 03:44:28.280010 containerd[1485]: time="2025-05-13T03:44:28.279715532Z" level=warning msg="container event discarded" container=fa1d34c9fd4ddd3fbd37d7bb7616101396bee92cef2ee75a0f4bb9d2b8a91e7e type=CONTAINER_CREATED_EVENT
May 13 03:44:28.306409 containerd[1485]: time="2025-05-13T03:44:28.306169701Z" level=warning msg="container event discarded" container=8bb9bad6304d915f4209ea7db37b651f2d29446644b7d44a5723cdc0d1cfd1fa type=CONTAINER_CREATED_EVENT
May 13 03:44:28.306409 containerd[1485]: time="2025-05-13T03:44:28.306298284Z" level=warning msg="container event discarded" container=9cdf8ebf7fb24eee9ec213bfc05fe5ac6e1b3c3572b9cce1a82cd46862ffe765 type=CONTAINER_CREATED_EVENT
May 13 03:44:28.306409 containerd[1485]: time="2025-05-13T03:44:28.306338560Z" level=warning msg="container event discarded" container=9cdf8ebf7fb24eee9ec213bfc05fe5ac6e1b3c3572b9cce1a82cd46862ffe765 type=CONTAINER_STARTED_EVENT
May 13 03:44:28.355832 containerd[1485]: time="2025-05-13T03:44:28.355709918Z" level=warning msg="container event discarded" container=d2b58c612fbb8cfa98964b7794ef854114d8be25af7c10366ddbb3ca54f5945a type=CONTAINER_CREATED_EVENT
May 13 03:44:28.405711 sshd[4220]: Connection closed by 172.24.4.1 port 44992
May 13 03:44:28.404650 sshd-session[4218]: pam_unix(sshd:session): session closed for user core
May 13 03:44:28.412755 systemd[1]: sshd@21-172.24.4.174:22-172.24.4.1:44992.service: Deactivated successfully.
May 13 03:44:28.418704 systemd[1]: session-24.scope: Deactivated successfully.
May 13 03:44:28.420869 systemd-logind[1468]: Session 24 logged out. Waiting for processes to exit.
May 13 03:44:28.423070 systemd-logind[1468]: Removed session 24.
May 13 03:44:28.432063 containerd[1485]: time="2025-05-13T03:44:28.431957233Z" level=warning msg="container event discarded" container=fa1d34c9fd4ddd3fbd37d7bb7616101396bee92cef2ee75a0f4bb9d2b8a91e7e type=CONTAINER_STARTED_EVENT
May 13 03:44:28.452220 containerd[1485]: time="2025-05-13T03:44:28.452173175Z" level=warning msg="container event discarded" container=8bb9bad6304d915f4209ea7db37b651f2d29446644b7d44a5723cdc0d1cfd1fa type=CONTAINER_STARTED_EVENT
May 13 03:44:28.494697 containerd[1485]: time="2025-05-13T03:44:28.494502113Z" level=warning msg="container event discarded" container=d2b58c612fbb8cfa98964b7794ef854114d8be25af7c10366ddbb3ca54f5945a type=CONTAINER_STARTED_EVENT
May 13 03:44:33.429535 systemd[1]: Started sshd@22-172.24.4.174:22-172.24.4.1:44998.service - OpenSSH per-connection server daemon (172.24.4.1:44998).
May 13 03:44:34.498493 sshd[4232]: Accepted publickey for core from 172.24.4.1 port 44998 ssh2: RSA SHA256:opboDc8cTXVJHtjjKc0iUyBIh5veaBDSbNiIl+xLW2w
May 13 03:44:34.501959 sshd-session[4232]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 03:44:34.524908 systemd-logind[1468]: New session 25 of user core.
May 13 03:44:34.535653 systemd[1]: Started session-25.scope - Session 25 of User core.
May 13 03:44:35.271893 sshd[4236]: Connection closed by 172.24.4.1 port 44998
May 13 03:44:35.274443 sshd-session[4232]: pam_unix(sshd:session): session closed for user core
May 13 03:44:35.292165 systemd[1]: sshd@22-172.24.4.174:22-172.24.4.1:44998.service: Deactivated successfully.
May 13 03:44:35.298533 systemd[1]: session-25.scope: Deactivated successfully.
May 13 03:44:35.304072 systemd-logind[1468]: Session 25 logged out. Waiting for processes to exit.
May 13 03:44:35.310812 systemd[1]: Started sshd@23-172.24.4.174:22-172.24.4.1:40688.service - OpenSSH per-connection server daemon (172.24.4.1:40688).
May 13 03:44:35.314927 systemd-logind[1468]: Removed session 25.
May 13 03:44:36.536812 sshd[4247]: Accepted publickey for core from 172.24.4.1 port 40688 ssh2: RSA SHA256:opboDc8cTXVJHtjjKc0iUyBIh5veaBDSbNiIl+xLW2w
May 13 03:44:36.539961 sshd-session[4247]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 03:44:36.555741 systemd-logind[1468]: New session 26 of user core.
May 13 03:44:36.561623 systemd[1]: Started session-26.scope - Session 26 of User core.
May 13 03:44:39.162866 containerd[1485]: time="2025-05-13T03:44:39.161977376Z" level=info msg="StopContainer for \"66ce90e7f0224ec9ab11f04b8ffb3d29e0ba79ac94c963d4af046aca8de765c7\" with timeout 30 (s)"
May 13 03:44:39.164898 containerd[1485]: time="2025-05-13T03:44:39.164390927Z" level=info msg="Stop container \"66ce90e7f0224ec9ab11f04b8ffb3d29e0ba79ac94c963d4af046aca8de765c7\" with signal terminated"
May 13 03:44:39.168283 containerd[1485]: time="2025-05-13T03:44:39.168194264Z" level=warning msg="container event discarded" container=eeafcc0301cb35c2f815ba4e693a181c215cc3957b6b8ca1f7b1122a7dc9ae05 type=CONTAINER_CREATED_EVENT
May 13 03:44:39.168283 containerd[1485]: time="2025-05-13T03:44:39.168268505Z" level=warning msg="container event discarded" container=eeafcc0301cb35c2f815ba4e693a181c215cc3957b6b8ca1f7b1122a7dc9ae05 type=CONTAINER_STARTED_EVENT
May 13 03:44:39.209859 containerd[1485]: time="2025-05-13T03:44:39.209776959Z" level=warning msg="container event discarded" container=839dce74b514fce5cebd3010a43c94a0856e0722f07d274627bfb2ef50f6c136 type=CONTAINER_CREATED_EVENT
May 13 03:44:39.255008 containerd[1485]: time="2025-05-13T03:44:39.254941322Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 13 03:44:39.277277 containerd[1485]: time="2025-05-13T03:44:39.276607250Z" level=info msg="TaskExit event in podsandbox handler container_id:\"efc23b252c4d9acfe79c4b113c20871fddcc11dbdd6b80ffefa0e618c7c43857\" id:\"68f6975d8910601fc8ca4e5bfd4dc3a347b64a8ace155d6d405bf3c34c6d3f15\" pid:4275 exited_at:{seconds:1747107879 nanos:275872775}"
May 13 03:44:39.287420 containerd[1485]: time="2025-05-13T03:44:39.287086648Z" level=info msg="StopContainer for \"efc23b252c4d9acfe79c4b113c20871fddcc11dbdd6b80ffefa0e618c7c43857\" with timeout 2 (s)"
May 13 03:44:39.288154 containerd[1485]: time="2025-05-13T03:44:39.288102616Z" level=info msg="Stop container \"efc23b252c4d9acfe79c4b113c20871fddcc11dbdd6b80ffefa0e618c7c43857\" with signal terminated"
May 13 03:44:39.295935 containerd[1485]: time="2025-05-13T03:44:39.295840330Z" level=warning msg="container event discarded" container=839dce74b514fce5cebd3010a43c94a0856e0722f07d274627bfb2ef50f6c136 type=CONTAINER_STARTED_EVENT
May 13 03:44:39.305647 systemd[1]: cri-containerd-66ce90e7f0224ec9ab11f04b8ffb3d29e0ba79ac94c963d4af046aca8de765c7.scope: Deactivated successfully.
May 13 03:44:39.306505 systemd[1]: cri-containerd-66ce90e7f0224ec9ab11f04b8ffb3d29e0ba79ac94c963d4af046aca8de765c7.scope: Consumed 1.252s CPU time, 24.3M memory peak, 4K written to disk.
May 13 03:44:39.311045 systemd-networkd[1381]: lxc_health: Link DOWN
May 13 03:44:39.311054 systemd-networkd[1381]: lxc_health: Lost carrier
May 13 03:44:39.337575 systemd[1]: cri-containerd-efc23b252c4d9acfe79c4b113c20871fddcc11dbdd6b80ffefa0e618c7c43857.scope: Deactivated successfully.
May 13 03:44:39.338021 systemd[1]: cri-containerd-efc23b252c4d9acfe79c4b113c20871fddcc11dbdd6b80ffefa0e618c7c43857.scope: Consumed 10.395s CPU time, 125.6M memory peak, 128K read from disk, 13.3M written to disk.
May 13 03:44:39.345731 containerd[1485]: time="2025-05-13T03:44:39.342912504Z" level=info msg="received exit event container_id:\"66ce90e7f0224ec9ab11f04b8ffb3d29e0ba79ac94c963d4af046aca8de765c7\" id:\"66ce90e7f0224ec9ab11f04b8ffb3d29e0ba79ac94c963d4af046aca8de765c7\" pid:3270 exited_at:{seconds:1747107879 nanos:341848023}"
May 13 03:44:39.345731 containerd[1485]: time="2025-05-13T03:44:39.345433107Z" level=info msg="TaskExit event in podsandbox handler container_id:\"66ce90e7f0224ec9ab11f04b8ffb3d29e0ba79ac94c963d4af046aca8de765c7\" id:\"66ce90e7f0224ec9ab11f04b8ffb3d29e0ba79ac94c963d4af046aca8de765c7\" pid:3270 exited_at:{seconds:1747107879 nanos:341848023}"
May 13 03:44:39.345731 containerd[1485]: time="2025-05-13T03:44:39.345613910Z" level=info msg="received exit event container_id:\"efc23b252c4d9acfe79c4b113c20871fddcc11dbdd6b80ffefa0e618c7c43857\" id:\"efc23b252c4d9acfe79c4b113c20871fddcc11dbdd6b80ffefa0e618c7c43857\" pid:3344 exited_at:{seconds:1747107879 nanos:341127706}"
May 13 03:44:39.345973 containerd[1485]: time="2025-05-13T03:44:39.345925141Z" level=info msg="TaskExit event in podsandbox handler container_id:\"efc23b252c4d9acfe79c4b113c20871fddcc11dbdd6b80ffefa0e618c7c43857\" id:\"efc23b252c4d9acfe79c4b113c20871fddcc11dbdd6b80ffefa0e618c7c43857\" pid:3344 exited_at:{seconds:1747107879 nanos:341127706}"
May 13 03:44:39.395185 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-efc23b252c4d9acfe79c4b113c20871fddcc11dbdd6b80ffefa0e618c7c43857-rootfs.mount: Deactivated successfully.
May 13 03:44:39.423837 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-66ce90e7f0224ec9ab11f04b8ffb3d29e0ba79ac94c963d4af046aca8de765c7-rootfs.mount: Deactivated successfully.
May 13 03:44:39.435104 containerd[1485]: time="2025-05-13T03:44:39.434047358Z" level=info msg="StopContainer for \"66ce90e7f0224ec9ab11f04b8ffb3d29e0ba79ac94c963d4af046aca8de765c7\" returns successfully"
May 13 03:44:39.435293 containerd[1485]: time="2025-05-13T03:44:39.435132467Z" level=info msg="StopPodSandbox for \"e582aaf94acdbb55a81479ff6ab6f7629cae08cc02c326520d1f004fb7554338\""
May 13 03:44:39.435293 containerd[1485]: time="2025-05-13T03:44:39.435218661Z" level=info msg="Container to stop \"66ce90e7f0224ec9ab11f04b8ffb3d29e0ba79ac94c963d4af046aca8de765c7\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 13 03:44:39.436075 containerd[1485]: time="2025-05-13T03:44:39.436048927Z" level=info msg="StopContainer for \"efc23b252c4d9acfe79c4b113c20871fddcc11dbdd6b80ffefa0e618c7c43857\" returns successfully"
May 13 03:44:39.436853 containerd[1485]: time="2025-05-13T03:44:39.436798880Z" level=info msg="StopPodSandbox for \"d39e13ef00390611776c6811cc2e77a0d804041695a596ca3e1e15c02254af19\""
May 13 03:44:39.436949 containerd[1485]: time="2025-05-13T03:44:39.436885193Z" level=info msg="Container to stop \"be58dfa33e66d34753e31022089af105a1f9b73aeb46c2b51e040bcac8c9cd1e\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 13 03:44:39.436949 containerd[1485]: time="2025-05-13T03:44:39.436901976Z" level=info msg="Container to stop \"0bd994327bd371eb04fb58ebcb4aab99642e74da74b4b7e8a93377381e71e8e0\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 13 03:44:39.436949 containerd[1485]: time="2025-05-13T03:44:39.436946711Z" level=info msg="Container to stop \"efc23b252c4d9acfe79c4b113c20871fddcc11dbdd6b80ffefa0e618c7c43857\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 13 03:44:39.437098 containerd[1485]: time="2025-05-13T03:44:39.436960206Z" level=info msg="Container to stop \"2a246280573911fae46b4d73d2c8655cba01db2e2be5fe5c62c4869f705d5439\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 13 03:44:39.437098 containerd[1485]: time="2025-05-13T03:44:39.436979763Z" level=info msg="Container to stop \"73282d56a72f24905fd871beec5518e2ab7d697c572bef1716ac6586e9b6eddd\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 13 03:44:39.451782 systemd[1]: cri-containerd-e582aaf94acdbb55a81479ff6ab6f7629cae08cc02c326520d1f004fb7554338.scope: Deactivated successfully.
May 13 03:44:39.459805 containerd[1485]: time="2025-05-13T03:44:39.459486727Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e582aaf94acdbb55a81479ff6ab6f7629cae08cc02c326520d1f004fb7554338\" id:\"e582aaf94acdbb55a81479ff6ab6f7629cae08cc02c326520d1f004fb7554338\" pid:2983 exit_status:137 exited_at:{seconds:1747107879 nanos:458722747}"
May 13 03:44:39.463274 systemd[1]: cri-containerd-d39e13ef00390611776c6811cc2e77a0d804041695a596ca3e1e15c02254af19.scope: Deactivated successfully.
May 13 03:44:39.465949 containerd[1485]: time="2025-05-13T03:44:39.465888295Z" level=warning msg="container event discarded" container=d39e13ef00390611776c6811cc2e77a0d804041695a596ca3e1e15c02254af19 type=CONTAINER_CREATED_EVENT
May 13 03:44:39.466172 containerd[1485]: time="2025-05-13T03:44:39.466148399Z" level=warning msg="container event discarded" container=d39e13ef00390611776c6811cc2e77a0d804041695a596ca3e1e15c02254af19 type=CONTAINER_STARTED_EVENT
May 13 03:44:39.501951 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d39e13ef00390611776c6811cc2e77a0d804041695a596ca3e1e15c02254af19-rootfs.mount: Deactivated successfully.
May 13 03:44:39.521177 containerd[1485]: time="2025-05-13T03:44:39.521132167Z" level=warning msg="container event discarded" container=e582aaf94acdbb55a81479ff6ab6f7629cae08cc02c326520d1f004fb7554338 type=CONTAINER_CREATED_EVENT
May 13 03:44:39.521177 containerd[1485]: time="2025-05-13T03:44:39.521208953Z" level=warning msg="container event discarded" container=e582aaf94acdbb55a81479ff6ab6f7629cae08cc02c326520d1f004fb7554338 type=CONTAINER_STARTED_EVENT
May 13 03:44:39.522157 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e582aaf94acdbb55a81479ff6ab6f7629cae08cc02c326520d1f004fb7554338-rootfs.mount: Deactivated successfully.
May 13 03:44:39.523277 containerd[1485]: time="2025-05-13T03:44:39.522692639Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d39e13ef00390611776c6811cc2e77a0d804041695a596ca3e1e15c02254af19\" id:\"d39e13ef00390611776c6811cc2e77a0d804041695a596ca3e1e15c02254af19\" pid:2906 exit_status:137 exited_at:{seconds:1747107879 nanos:464703097}"
May 13 03:44:39.523277 containerd[1485]: time="2025-05-13T03:44:39.522998750Z" level=info msg="received exit event sandbox_id:\"d39e13ef00390611776c6811cc2e77a0d804041695a596ca3e1e15c02254af19\" exit_status:137 exited_at:{seconds:1747107879 nanos:464703097}"
May 13 03:44:39.526297 containerd[1485]: time="2025-05-13T03:44:39.526264397Z" level=info msg="TearDown network for sandbox \"e582aaf94acdbb55a81479ff6ab6f7629cae08cc02c326520d1f004fb7554338\" successfully"
May 13 03:44:39.526422 containerd[1485]: time="2025-05-13T03:44:39.526403661Z" level=info msg="StopPodSandbox for \"e582aaf94acdbb55a81479ff6ab6f7629cae08cc02c326520d1f004fb7554338\" returns successfully"
May 13 03:44:39.526644 containerd[1485]: time="2025-05-13T03:44:39.526622587Z" level=info msg="received exit event sandbox_id:\"e582aaf94acdbb55a81479ff6ab6f7629cae08cc02c326520d1f004fb7554338\" exit_status:137 exited_at:{seconds:1747107879 nanos:458722747}"
May 13 03:44:39.527363 containerd[1485]: time="2025-05-13T03:44:39.527338014Z" level=info msg="TearDown network for sandbox \"d39e13ef00390611776c6811cc2e77a0d804041695a596ca3e1e15c02254af19\" successfully"
May 13 03:44:39.527467 containerd[1485]: time="2025-05-13T03:44:39.527448664Z" level=info msg="StopPodSandbox for \"d39e13ef00390611776c6811cc2e77a0d804041695a596ca3e1e15c02254af19\" returns successfully"
May 13 03:44:39.527634 containerd[1485]: time="2025-05-13T03:44:39.527612916Z" level=info msg="shim disconnected" id=d39e13ef00390611776c6811cc2e77a0d804041695a596ca3e1e15c02254af19 namespace=k8s.io
May 13 03:44:39.527742 containerd[1485]: time="2025-05-13T03:44:39.527700913Z" level=warning msg="cleaning up after shim disconnected" id=d39e13ef00390611776c6811cc2e77a0d804041695a596ca3e1e15c02254af19 namespace=k8s.io
May 13 03:44:39.528033 containerd[1485]: time="2025-05-13T03:44:39.527716613Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 13 03:44:39.537741 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e582aaf94acdbb55a81479ff6ab6f7629cae08cc02c326520d1f004fb7554338-shm.mount: Deactivated successfully.
May 13 03:44:39.537889 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d39e13ef00390611776c6811cc2e77a0d804041695a596ca3e1e15c02254af19-shm.mount: Deactivated successfully.
May 13 03:44:39.543195 containerd[1485]: time="2025-05-13T03:44:39.542785590Z" level=info msg="shim disconnected" id=e582aaf94acdbb55a81479ff6ab6f7629cae08cc02c326520d1f004fb7554338 namespace=k8s.io
May 13 03:44:39.543195 containerd[1485]: time="2025-05-13T03:44:39.542855643Z" level=warning msg="cleaning up after shim disconnected" id=e582aaf94acdbb55a81479ff6ab6f7629cae08cc02c326520d1f004fb7554338 namespace=k8s.io
May 13 03:44:39.545734 containerd[1485]: time="2025-05-13T03:44:39.542876333Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 13 03:44:39.606347 kubelet[2718]: I0513 03:44:39.605582    2718 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9fe07975-15be-419c-b043-80900aae2184-etc-cni-netd\") pod \"9fe07975-15be-419c-b043-80900aae2184\" (UID: \"9fe07975-15be-419c-b043-80900aae2184\") "
May 13 03:44:39.606347 kubelet[2718]: I0513 03:44:39.605634    2718 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9fe07975-15be-419c-b043-80900aae2184-host-proc-sys-kernel\") pod \"9fe07975-15be-419c-b043-80900aae2184\" (UID: \"9fe07975-15be-419c-b043-80900aae2184\") "
May 13 03:44:39.606347 kubelet[2718]: I0513 03:44:39.605667    2718 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9fe07975-15be-419c-b043-80900aae2184-hostproc\") pod \"9fe07975-15be-419c-b043-80900aae2184\" (UID: \"9fe07975-15be-419c-b043-80900aae2184\") "
May 13 03:44:39.606347 kubelet[2718]: I0513 03:44:39.605686    2718 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9fe07975-15be-419c-b043-80900aae2184-cni-path\") pod \"9fe07975-15be-419c-b043-80900aae2184\" (UID: \"9fe07975-15be-419c-b043-80900aae2184\") "
May 13 03:44:39.606347 kubelet[2718]: I0513 03:44:39.605712    2718 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9fe07975-15be-419c-b043-80900aae2184-cilium-config-path\") pod \"9fe07975-15be-419c-b043-80900aae2184\" (UID: \"9fe07975-15be-419c-b043-80900aae2184\") "
May 13 03:44:39.606347 kubelet[2718]: I0513 03:44:39.605742    2718 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9fe07975-15be-419c-b043-80900aae2184-clustermesh-secrets\") pod \"9fe07975-15be-419c-b043-80900aae2184\" (UID: \"9fe07975-15be-419c-b043-80900aae2184\") "
May 13 03:44:39.607156 kubelet[2718]: I0513 03:44:39.605712    2718 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9fe07975-15be-419c-b043-80900aae2184-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "9fe07975-15be-419c-b043-80900aae2184" (UID: "9fe07975-15be-419c-b043-80900aae2184"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 13 03:44:39.607156 kubelet[2718]: I0513 03:44:39.605767    2718 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9fe07975-15be-419c-b043-80900aae2184-hubble-tls\") pod \"9fe07975-15be-419c-b043-80900aae2184\" (UID: \"9fe07975-15be-419c-b043-80900aae2184\") "
May 13 03:44:39.607156 kubelet[2718]: I0513 03:44:39.605810    2718 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9fe07975-15be-419c-b043-80900aae2184-xtables-lock\") pod \"9fe07975-15be-419c-b043-80900aae2184\" (UID: \"9fe07975-15be-419c-b043-80900aae2184\") "
May 13 03:44:39.607156 kubelet[2718]: I0513 03:44:39.605822    2718 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9fe07975-15be-419c-b043-80900aae2184-cni-path" (OuterVolumeSpecName: "cni-path") pod "9fe07975-15be-419c-b043-80900aae2184" (UID: "9fe07975-15be-419c-b043-80900aae2184"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 13 03:44:39.607156 kubelet[2718]: I0513 03:44:39.605835    2718 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9fe07975-15be-419c-b043-80900aae2184-cilium-cgroup\") pod \"9fe07975-15be-419c-b043-80900aae2184\" (UID: \"9fe07975-15be-419c-b043-80900aae2184\") "
May 13 03:44:39.607359 kubelet[2718]: I0513 03:44:39.605843    2718 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9fe07975-15be-419c-b043-80900aae2184-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "9fe07975-15be-419c-b043-80900aae2184" (UID: "9fe07975-15be-419c-b043-80900aae2184"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 13 03:44:39.607359 kubelet[2718]: I0513 03:44:39.605856    2718 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9fe07975-15be-419c-b043-80900aae2184-lib-modules\") pod \"9fe07975-15be-419c-b043-80900aae2184\" (UID: \"9fe07975-15be-419c-b043-80900aae2184\") "
May 13 03:44:39.607359 kubelet[2718]: I0513 03:44:39.605861    2718 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9fe07975-15be-419c-b043-80900aae2184-hostproc" (OuterVolumeSpecName: "hostproc") pod "9fe07975-15be-419c-b043-80900aae2184" (UID: "9fe07975-15be-419c-b043-80900aae2184"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 13 03:44:39.607359 kubelet[2718]: I0513 03:44:39.605876    2718 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/192a1f39-6d73-48d5-88ae-2618d67d348d-cilium-config-path\") pod \"192a1f39-6d73-48d5-88ae-2618d67d348d\" (UID: \"192a1f39-6d73-48d5-88ae-2618d67d348d\") "
May 13 03:44:39.607359 kubelet[2718]: I0513 03:44:39.605893    2718 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9fe07975-15be-419c-b043-80900aae2184-bpf-maps\") pod \"9fe07975-15be-419c-b043-80900aae2184\" (UID: \"9fe07975-15be-419c-b043-80900aae2184\") "
May 13 03:44:39.607635 kubelet[2718]: I0513 03:44:39.605912    2718 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9fe07975-15be-419c-b043-80900aae2184-host-proc-sys-net\") pod \"9fe07975-15be-419c-b043-80900aae2184\" (UID: \"9fe07975-15be-419c-b043-80900aae2184\") "
May 13 03:44:39.607635 kubelet[2718]: I0513 03:44:39.605932    2718 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9fe07975-15be-419c-b043-80900aae2184-cilium-run\") pod \"9fe07975-15be-419c-b043-80900aae2184\" (UID: \"9fe07975-15be-419c-b043-80900aae2184\") "
May 13 03:44:39.607635 kubelet[2718]: I0513 03:44:39.605966    2718 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5d2gp\" (UniqueName: \"kubernetes.io/projected/9fe07975-15be-419c-b043-80900aae2184-kube-api-access-5d2gp\") pod \"9fe07975-15be-419c-b043-80900aae2184\" (UID: \"9fe07975-15be-419c-b043-80900aae2184\") "
May 13 03:44:39.607635 kubelet[2718]: I0513 03:44:39.605987    2718 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dc9ls\" (UniqueName: \"kubernetes.io/projected/192a1f39-6d73-48d5-88ae-2618d67d348d-kube-api-access-dc9ls\") pod \"192a1f39-6d73-48d5-88ae-2618d67d348d\" (UID: \"192a1f39-6d73-48d5-88ae-2618d67d348d\") "
May 13 03:44:39.607635 kubelet[2718]: I0513 03:44:39.606042    2718 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9fe07975-15be-419c-b043-80900aae2184-etc-cni-netd\") on node \"ci-4284-0-0-n-62b177a255.novalocal\" DevicePath \"\""
May 13 03:44:39.607635 kubelet[2718]: I0513 03:44:39.606057    2718 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9fe07975-15be-419c-b043-80900aae2184-host-proc-sys-kernel\") on node \"ci-4284-0-0-n-62b177a255.novalocal\" DevicePath \"\""
May 13 03:44:39.607843 kubelet[2718]: I0513 03:44:39.606069    2718 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9fe07975-15be-419c-b043-80900aae2184-hostproc\") on node \"ci-4284-0-0-n-62b177a255.novalocal\" DevicePath \"\""
May 13 03:44:39.607843 kubelet[2718]: I0513 03:44:39.606081    2718 reconciler_common.go:299] "Volume detached for volume \"cni-path\"
(UniqueName: \"kubernetes.io/host-path/9fe07975-15be-419c-b043-80900aae2184-cni-path\") on node \"ci-4284-0-0-n-62b177a255.novalocal\" DevicePath \"\"" May 13 03:44:39.607843 kubelet[2718]: I0513 03:44:39.607769 2718 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9fe07975-15be-419c-b043-80900aae2184-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "9fe07975-15be-419c-b043-80900aae2184" (UID: "9fe07975-15be-419c-b043-80900aae2184"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 13 03:44:39.607843 kubelet[2718]: I0513 03:44:39.607828 2718 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9fe07975-15be-419c-b043-80900aae2184-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "9fe07975-15be-419c-b043-80900aae2184" (UID: "9fe07975-15be-419c-b043-80900aae2184"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 13 03:44:39.607988 kubelet[2718]: I0513 03:44:39.607852 2718 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9fe07975-15be-419c-b043-80900aae2184-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "9fe07975-15be-419c-b043-80900aae2184" (UID: "9fe07975-15be-419c-b043-80900aae2184"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 13 03:44:39.609286 kubelet[2718]: I0513 03:44:39.608121 2718 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9fe07975-15be-419c-b043-80900aae2184-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "9fe07975-15be-419c-b043-80900aae2184" (UID: "9fe07975-15be-419c-b043-80900aae2184"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 13 03:44:39.609286 kubelet[2718]: I0513 03:44:39.608155 2718 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9fe07975-15be-419c-b043-80900aae2184-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "9fe07975-15be-419c-b043-80900aae2184" (UID: "9fe07975-15be-419c-b043-80900aae2184"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 13 03:44:39.609286 kubelet[2718]: I0513 03:44:39.608193 2718 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9fe07975-15be-419c-b043-80900aae2184-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "9fe07975-15be-419c-b043-80900aae2184" (UID: "9fe07975-15be-419c-b043-80900aae2184"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 13 03:44:39.611063 kubelet[2718]: I0513 03:44:39.611025 2718 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9fe07975-15be-419c-b043-80900aae2184-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "9fe07975-15be-419c-b043-80900aae2184" (UID: "9fe07975-15be-419c-b043-80900aae2184"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 13 03:44:39.613909 kubelet[2718]: I0513 03:44:39.613863 2718 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9fe07975-15be-419c-b043-80900aae2184-kube-api-access-5d2gp" (OuterVolumeSpecName: "kube-api-access-5d2gp") pod "9fe07975-15be-419c-b043-80900aae2184" (UID: "9fe07975-15be-419c-b043-80900aae2184"). InnerVolumeSpecName "kube-api-access-5d2gp". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" May 13 03:44:39.614059 kubelet[2718]: I0513 03:44:39.614018 2718 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/192a1f39-6d73-48d5-88ae-2618d67d348d-kube-api-access-dc9ls" (OuterVolumeSpecName: "kube-api-access-dc9ls") pod "192a1f39-6d73-48d5-88ae-2618d67d348d" (UID: "192a1f39-6d73-48d5-88ae-2618d67d348d"). InnerVolumeSpecName "kube-api-access-dc9ls". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 13 03:44:39.618034 kubelet[2718]: I0513 03:44:39.617996 2718 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9fe07975-15be-419c-b043-80900aae2184-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "9fe07975-15be-419c-b043-80900aae2184" (UID: "9fe07975-15be-419c-b043-80900aae2184"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 13 03:44:39.620545 kubelet[2718]: I0513 03:44:39.620480 2718 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9fe07975-15be-419c-b043-80900aae2184-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "9fe07975-15be-419c-b043-80900aae2184" (UID: "9fe07975-15be-419c-b043-80900aae2184"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" May 13 03:44:39.620719 kubelet[2718]: I0513 03:44:39.620692 2718 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/192a1f39-6d73-48d5-88ae-2618d67d348d-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "192a1f39-6d73-48d5-88ae-2618d67d348d" (UID: "192a1f39-6d73-48d5-88ae-2618d67d348d"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 13 03:44:39.709329 kubelet[2718]: I0513 03:44:39.707160 2718 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9fe07975-15be-419c-b043-80900aae2184-host-proc-sys-net\") on node \"ci-4284-0-0-n-62b177a255.novalocal\" DevicePath \"\"" May 13 03:44:39.709329 kubelet[2718]: I0513 03:44:39.707348 2718 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9fe07975-15be-419c-b043-80900aae2184-cilium-run\") on node \"ci-4284-0-0-n-62b177a255.novalocal\" DevicePath \"\"" May 13 03:44:39.709329 kubelet[2718]: I0513 03:44:39.707398 2718 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5d2gp\" (UniqueName: \"kubernetes.io/projected/9fe07975-15be-419c-b043-80900aae2184-kube-api-access-5d2gp\") on node \"ci-4284-0-0-n-62b177a255.novalocal\" DevicePath \"\"" May 13 03:44:39.709329 kubelet[2718]: I0513 03:44:39.707426 2718 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dc9ls\" (UniqueName: \"kubernetes.io/projected/192a1f39-6d73-48d5-88ae-2618d67d348d-kube-api-access-dc9ls\") on node \"ci-4284-0-0-n-62b177a255.novalocal\" DevicePath \"\"" May 13 03:44:39.709329 kubelet[2718]: I0513 03:44:39.707452 2718 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9fe07975-15be-419c-b043-80900aae2184-cilium-config-path\") on node \"ci-4284-0-0-n-62b177a255.novalocal\" DevicePath \"\"" May 13 03:44:39.709329 kubelet[2718]: I0513 03:44:39.707522 2718 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9fe07975-15be-419c-b043-80900aae2184-clustermesh-secrets\") on node \"ci-4284-0-0-n-62b177a255.novalocal\" DevicePath \"\"" May 13 03:44:39.709329 kubelet[2718]: I0513 03:44:39.707548 2718 reconciler_common.go:299] "Volume detached for volume 
\"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9fe07975-15be-419c-b043-80900aae2184-hubble-tls\") on node \"ci-4284-0-0-n-62b177a255.novalocal\" DevicePath \"\"" May 13 03:44:39.710082 kubelet[2718]: I0513 03:44:39.707578 2718 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9fe07975-15be-419c-b043-80900aae2184-xtables-lock\") on node \"ci-4284-0-0-n-62b177a255.novalocal\" DevicePath \"\"" May 13 03:44:39.710082 kubelet[2718]: I0513 03:44:39.707605 2718 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9fe07975-15be-419c-b043-80900aae2184-cilium-cgroup\") on node \"ci-4284-0-0-n-62b177a255.novalocal\" DevicePath \"\"" May 13 03:44:39.710082 kubelet[2718]: I0513 03:44:39.707628 2718 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9fe07975-15be-419c-b043-80900aae2184-lib-modules\") on node \"ci-4284-0-0-n-62b177a255.novalocal\" DevicePath \"\"" May 13 03:44:39.710082 kubelet[2718]: I0513 03:44:39.707655 2718 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/192a1f39-6d73-48d5-88ae-2618d67d348d-cilium-config-path\") on node \"ci-4284-0-0-n-62b177a255.novalocal\" DevicePath \"\"" May 13 03:44:39.710082 kubelet[2718]: I0513 03:44:39.707679 2718 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9fe07975-15be-419c-b043-80900aae2184-bpf-maps\") on node \"ci-4284-0-0-n-62b177a255.novalocal\" DevicePath \"\"" May 13 03:44:40.034474 systemd[1]: Removed slice kubepods-burstable-pod9fe07975_15be_419c_b043_80900aae2184.slice - libcontainer container kubepods-burstable-pod9fe07975_15be_419c_b043_80900aae2184.slice. 
May 13 03:44:40.035116 systemd[1]: kubepods-burstable-pod9fe07975_15be_419c_b043_80900aae2184.slice: Consumed 10.494s CPU time, 126.1M memory peak, 128K read from disk, 15.4M written to disk. May 13 03:44:40.040784 systemd[1]: Removed slice kubepods-besteffort-pod192a1f39_6d73_48d5_88ae_2618d67d348d.slice - libcontainer container kubepods-besteffort-pod192a1f39_6d73_48d5_88ae_2618d67d348d.slice. May 13 03:44:40.041048 systemd[1]: kubepods-besteffort-pod192a1f39_6d73_48d5_88ae_2618d67d348d.slice: Consumed 1.281s CPU time, 24.6M memory peak, 4K written to disk. May 13 03:44:40.406784 systemd[1]: var-lib-kubelet-pods-9fe07975\x2d15be\x2d419c\x2db043\x2d80900aae2184-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 13 03:44:40.407617 systemd[1]: var-lib-kubelet-pods-9fe07975\x2d15be\x2d419c\x2db043\x2d80900aae2184-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. May 13 03:44:40.407818 systemd[1]: var-lib-kubelet-pods-192a1f39\x2d6d73\x2d48d5\x2d88ae\x2d2618d67d348d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ddc9ls.mount: Deactivated successfully. May 13 03:44:40.408015 systemd[1]: var-lib-kubelet-pods-9fe07975\x2d15be\x2d419c\x2db043\x2d80900aae2184-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d5d2gp.mount: Deactivated successfully. 
May 13 03:44:40.412218 kubelet[2718]: I0513 03:44:40.411816 2718 scope.go:117] "RemoveContainer" containerID="66ce90e7f0224ec9ab11f04b8ffb3d29e0ba79ac94c963d4af046aca8de765c7" May 13 03:44:40.435945 containerd[1485]: time="2025-05-13T03:44:40.435854520Z" level=info msg="RemoveContainer for \"66ce90e7f0224ec9ab11f04b8ffb3d29e0ba79ac94c963d4af046aca8de765c7\"" May 13 03:44:40.495252 containerd[1485]: time="2025-05-13T03:44:40.494749658Z" level=info msg="RemoveContainer for \"66ce90e7f0224ec9ab11f04b8ffb3d29e0ba79ac94c963d4af046aca8de765c7\" returns successfully" May 13 03:44:40.499704 kubelet[2718]: I0513 03:44:40.498041 2718 scope.go:117] "RemoveContainer" containerID="66ce90e7f0224ec9ab11f04b8ffb3d29e0ba79ac94c963d4af046aca8de765c7" May 13 03:44:40.501520 containerd[1485]: time="2025-05-13T03:44:40.501358101Z" level=error msg="ContainerStatus for \"66ce90e7f0224ec9ab11f04b8ffb3d29e0ba79ac94c963d4af046aca8de765c7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"66ce90e7f0224ec9ab11f04b8ffb3d29e0ba79ac94c963d4af046aca8de765c7\": not found" May 13 03:44:40.503429 kubelet[2718]: E0513 03:44:40.503394 2718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"66ce90e7f0224ec9ab11f04b8ffb3d29e0ba79ac94c963d4af046aca8de765c7\": not found" containerID="66ce90e7f0224ec9ab11f04b8ffb3d29e0ba79ac94c963d4af046aca8de765c7" May 13 03:44:40.505479 kubelet[2718]: I0513 03:44:40.503663 2718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"66ce90e7f0224ec9ab11f04b8ffb3d29e0ba79ac94c963d4af046aca8de765c7"} err="failed to get container status \"66ce90e7f0224ec9ab11f04b8ffb3d29e0ba79ac94c963d4af046aca8de765c7\": rpc error: code = NotFound desc = an error occurred when try to find container \"66ce90e7f0224ec9ab11f04b8ffb3d29e0ba79ac94c963d4af046aca8de765c7\": not found" May 13 03:44:40.505479 
kubelet[2718]: I0513 03:44:40.505393 2718 scope.go:117] "RemoveContainer" containerID="efc23b252c4d9acfe79c4b113c20871fddcc11dbdd6b80ffefa0e618c7c43857" May 13 03:44:40.510583 containerd[1485]: time="2025-05-13T03:44:40.508592253Z" level=info msg="RemoveContainer for \"efc23b252c4d9acfe79c4b113c20871fddcc11dbdd6b80ffefa0e618c7c43857\"" May 13 03:44:40.519705 containerd[1485]: time="2025-05-13T03:44:40.519665872Z" level=info msg="RemoveContainer for \"efc23b252c4d9acfe79c4b113c20871fddcc11dbdd6b80ffefa0e618c7c43857\" returns successfully" May 13 03:44:40.520037 kubelet[2718]: I0513 03:44:40.520016 2718 scope.go:117] "RemoveContainer" containerID="0bd994327bd371eb04fb58ebcb4aab99642e74da74b4b7e8a93377381e71e8e0" May 13 03:44:40.523121 containerd[1485]: time="2025-05-13T03:44:40.523088337Z" level=info msg="RemoveContainer for \"0bd994327bd371eb04fb58ebcb4aab99642e74da74b4b7e8a93377381e71e8e0\"" May 13 03:44:40.534462 containerd[1485]: time="2025-05-13T03:44:40.534375502Z" level=info msg="RemoveContainer for \"0bd994327bd371eb04fb58ebcb4aab99642e74da74b4b7e8a93377381e71e8e0\" returns successfully" May 13 03:44:40.536361 kubelet[2718]: I0513 03:44:40.534618 2718 scope.go:117] "RemoveContainer" containerID="73282d56a72f24905fd871beec5518e2ab7d697c572bef1716ac6586e9b6eddd" May 13 03:44:40.541364 containerd[1485]: time="2025-05-13T03:44:40.541325333Z" level=info msg="RemoveContainer for \"73282d56a72f24905fd871beec5518e2ab7d697c572bef1716ac6586e9b6eddd\"" May 13 03:44:40.548678 containerd[1485]: time="2025-05-13T03:44:40.548631130Z" level=info msg="RemoveContainer for \"73282d56a72f24905fd871beec5518e2ab7d697c572bef1716ac6586e9b6eddd\" returns successfully" May 13 03:44:40.549128 kubelet[2718]: I0513 03:44:40.549089 2718 scope.go:117] "RemoveContainer" containerID="2a246280573911fae46b4d73d2c8655cba01db2e2be5fe5c62c4869f705d5439" May 13 03:44:40.550709 containerd[1485]: time="2025-05-13T03:44:40.550679128Z" level=info msg="RemoveContainer for 
\"2a246280573911fae46b4d73d2c8655cba01db2e2be5fe5c62c4869f705d5439\"" May 13 03:44:40.555530 containerd[1485]: time="2025-05-13T03:44:40.555477685Z" level=info msg="RemoveContainer for \"2a246280573911fae46b4d73d2c8655cba01db2e2be5fe5c62c4869f705d5439\" returns successfully" May 13 03:44:40.556021 kubelet[2718]: I0513 03:44:40.555994 2718 scope.go:117] "RemoveContainer" containerID="be58dfa33e66d34753e31022089af105a1f9b73aeb46c2b51e040bcac8c9cd1e" May 13 03:44:40.558045 containerd[1485]: time="2025-05-13T03:44:40.558003028Z" level=info msg="RemoveContainer for \"be58dfa33e66d34753e31022089af105a1f9b73aeb46c2b51e040bcac8c9cd1e\"" May 13 03:44:40.562040 containerd[1485]: time="2025-05-13T03:44:40.561999725Z" level=info msg="RemoveContainer for \"be58dfa33e66d34753e31022089af105a1f9b73aeb46c2b51e040bcac8c9cd1e\" returns successfully" May 13 03:44:40.562273 kubelet[2718]: I0513 03:44:40.562162 2718 scope.go:117] "RemoveContainer" containerID="efc23b252c4d9acfe79c4b113c20871fddcc11dbdd6b80ffefa0e618c7c43857" May 13 03:44:40.562622 containerd[1485]: time="2025-05-13T03:44:40.562573624Z" level=error msg="ContainerStatus for \"efc23b252c4d9acfe79c4b113c20871fddcc11dbdd6b80ffefa0e618c7c43857\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"efc23b252c4d9acfe79c4b113c20871fddcc11dbdd6b80ffefa0e618c7c43857\": not found" May 13 03:44:40.562825 kubelet[2718]: E0513 03:44:40.562770 2718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"efc23b252c4d9acfe79c4b113c20871fddcc11dbdd6b80ffefa0e618c7c43857\": not found" containerID="efc23b252c4d9acfe79c4b113c20871fddcc11dbdd6b80ffefa0e618c7c43857" May 13 03:44:40.562901 kubelet[2718]: I0513 03:44:40.562820 2718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"efc23b252c4d9acfe79c4b113c20871fddcc11dbdd6b80ffefa0e618c7c43857"} err="failed to get 
container status \"efc23b252c4d9acfe79c4b113c20871fddcc11dbdd6b80ffefa0e618c7c43857\": rpc error: code = NotFound desc = an error occurred when try to find container \"efc23b252c4d9acfe79c4b113c20871fddcc11dbdd6b80ffefa0e618c7c43857\": not found" May 13 03:44:40.562901 kubelet[2718]: I0513 03:44:40.562846 2718 scope.go:117] "RemoveContainer" containerID="0bd994327bd371eb04fb58ebcb4aab99642e74da74b4b7e8a93377381e71e8e0" May 13 03:44:40.563407 containerd[1485]: time="2025-05-13T03:44:40.563127255Z" level=error msg="ContainerStatus for \"0bd994327bd371eb04fb58ebcb4aab99642e74da74b4b7e8a93377381e71e8e0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0bd994327bd371eb04fb58ebcb4aab99642e74da74b4b7e8a93377381e71e8e0\": not found" May 13 03:44:40.563467 kubelet[2718]: E0513 03:44:40.563280 2718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0bd994327bd371eb04fb58ebcb4aab99642e74da74b4b7e8a93377381e71e8e0\": not found" containerID="0bd994327bd371eb04fb58ebcb4aab99642e74da74b4b7e8a93377381e71e8e0" May 13 03:44:40.563467 kubelet[2718]: I0513 03:44:40.563307 2718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0bd994327bd371eb04fb58ebcb4aab99642e74da74b4b7e8a93377381e71e8e0"} err="failed to get container status \"0bd994327bd371eb04fb58ebcb4aab99642e74da74b4b7e8a93377381e71e8e0\": rpc error: code = NotFound desc = an error occurred when try to find container \"0bd994327bd371eb04fb58ebcb4aab99642e74da74b4b7e8a93377381e71e8e0\": not found" May 13 03:44:40.563467 kubelet[2718]: I0513 03:44:40.563330 2718 scope.go:117] "RemoveContainer" containerID="73282d56a72f24905fd871beec5518e2ab7d697c572bef1716ac6586e9b6eddd" May 13 03:44:40.563569 containerd[1485]: time="2025-05-13T03:44:40.563494993Z" level=error msg="ContainerStatus for 
\"73282d56a72f24905fd871beec5518e2ab7d697c572bef1716ac6586e9b6eddd\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"73282d56a72f24905fd871beec5518e2ab7d697c572bef1716ac6586e9b6eddd\": not found" May 13 03:44:40.563742 kubelet[2718]: E0513 03:44:40.563671 2718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"73282d56a72f24905fd871beec5518e2ab7d697c572bef1716ac6586e9b6eddd\": not found" containerID="73282d56a72f24905fd871beec5518e2ab7d697c572bef1716ac6586e9b6eddd" May 13 03:44:40.563949 kubelet[2718]: I0513 03:44:40.563727 2718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"73282d56a72f24905fd871beec5518e2ab7d697c572bef1716ac6586e9b6eddd"} err="failed to get container status \"73282d56a72f24905fd871beec5518e2ab7d697c572bef1716ac6586e9b6eddd\": rpc error: code = NotFound desc = an error occurred when try to find container \"73282d56a72f24905fd871beec5518e2ab7d697c572bef1716ac6586e9b6eddd\": not found" May 13 03:44:40.563949 kubelet[2718]: I0513 03:44:40.563882 2718 scope.go:117] "RemoveContainer" containerID="2a246280573911fae46b4d73d2c8655cba01db2e2be5fe5c62c4869f705d5439" May 13 03:44:40.564163 containerd[1485]: time="2025-05-13T03:44:40.564125620Z" level=error msg="ContainerStatus for \"2a246280573911fae46b4d73d2c8655cba01db2e2be5fe5c62c4869f705d5439\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2a246280573911fae46b4d73d2c8655cba01db2e2be5fe5c62c4869f705d5439\": not found" May 13 03:44:40.564444 kubelet[2718]: E0513 03:44:40.564343 2718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2a246280573911fae46b4d73d2c8655cba01db2e2be5fe5c62c4869f705d5439\": not found" 
containerID="2a246280573911fae46b4d73d2c8655cba01db2e2be5fe5c62c4869f705d5439" May 13 03:44:40.564444 kubelet[2718]: I0513 03:44:40.564374 2718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2a246280573911fae46b4d73d2c8655cba01db2e2be5fe5c62c4869f705d5439"} err="failed to get container status \"2a246280573911fae46b4d73d2c8655cba01db2e2be5fe5c62c4869f705d5439\": rpc error: code = NotFound desc = an error occurred when try to find container \"2a246280573911fae46b4d73d2c8655cba01db2e2be5fe5c62c4869f705d5439\": not found" May 13 03:44:40.564444 kubelet[2718]: I0513 03:44:40.564390 2718 scope.go:117] "RemoveContainer" containerID="be58dfa33e66d34753e31022089af105a1f9b73aeb46c2b51e040bcac8c9cd1e" May 13 03:44:40.564743 containerd[1485]: time="2025-05-13T03:44:40.564669191Z" level=error msg="ContainerStatus for \"be58dfa33e66d34753e31022089af105a1f9b73aeb46c2b51e040bcac8c9cd1e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"be58dfa33e66d34753e31022089af105a1f9b73aeb46c2b51e040bcac8c9cd1e\": not found" May 13 03:44:40.564855 kubelet[2718]: E0513 03:44:40.564823 2718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"be58dfa33e66d34753e31022089af105a1f9b73aeb46c2b51e040bcac8c9cd1e\": not found" containerID="be58dfa33e66d34753e31022089af105a1f9b73aeb46c2b51e040bcac8c9cd1e" May 13 03:44:40.564919 kubelet[2718]: I0513 03:44:40.564848 2718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"be58dfa33e66d34753e31022089af105a1f9b73aeb46c2b51e040bcac8c9cd1e"} err="failed to get container status \"be58dfa33e66d34753e31022089af105a1f9b73aeb46c2b51e040bcac8c9cd1e\": rpc error: code = NotFound desc = an error occurred when try to find container \"be58dfa33e66d34753e31022089af105a1f9b73aeb46c2b51e040bcac8c9cd1e\": not found" May 13 
03:44:41.249888 sshd[4250]: Connection closed by 172.24.4.1 port 40688 May 13 03:44:41.251350 sshd-session[4247]: pam_unix(sshd:session): session closed for user core May 13 03:44:41.273916 systemd[1]: sshd@23-172.24.4.174:22-172.24.4.1:40688.service: Deactivated successfully. May 13 03:44:41.281604 systemd[1]: session-26.scope: Deactivated successfully. May 13 03:44:41.282562 systemd[1]: session-26.scope: Consumed 1.500s CPU time, 23.8M memory peak. May 13 03:44:41.288397 systemd-logind[1468]: Session 26 logged out. Waiting for processes to exit. May 13 03:44:41.291917 systemd[1]: Started sshd@24-172.24.4.174:22-172.24.4.1:40704.service - OpenSSH per-connection server daemon (172.24.4.1:40704). May 13 03:44:41.299411 systemd-logind[1468]: Removed session 26. May 13 03:44:42.021377 kubelet[2718]: I0513 03:44:42.021157 2718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="192a1f39-6d73-48d5-88ae-2618d67d348d" path="/var/lib/kubelet/pods/192a1f39-6d73-48d5-88ae-2618d67d348d/volumes" May 13 03:44:42.023031 kubelet[2718]: I0513 03:44:42.022960 2718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9fe07975-15be-419c-b043-80900aae2184" path="/var/lib/kubelet/pods/9fe07975-15be-419c-b043-80900aae2184/volumes" May 13 03:44:42.741624 sshd[4394]: Accepted publickey for core from 172.24.4.1 port 40704 ssh2: RSA SHA256:opboDc8cTXVJHtjjKc0iUyBIh5veaBDSbNiIl+xLW2w May 13 03:44:42.745087 sshd-session[4394]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 03:44:42.759348 systemd-logind[1468]: New session 27 of user core. May 13 03:44:42.763585 systemd[1]: Started session-27.scope - Session 27 of User core. 
May 13 03:44:44.239460 kubelet[2718]: E0513 03:44:44.239274 2718 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 13 03:44:44.665324 kubelet[2718]: I0513 03:44:44.664205 2718 memory_manager.go:355] "RemoveStaleState removing state" podUID="9fe07975-15be-419c-b043-80900aae2184" containerName="cilium-agent" May 13 03:44:44.665324 kubelet[2718]: I0513 03:44:44.665325 2718 memory_manager.go:355] "RemoveStaleState removing state" podUID="192a1f39-6d73-48d5-88ae-2618d67d348d" containerName="cilium-operator" May 13 03:44:44.668391 kubelet[2718]: I0513 03:44:44.668342 2718 status_manager.go:890] "Failed to get status for pod" podUID="8da44cb0-8c82-4730-a439-555dcec27c95" pod="kube-system/cilium-xpcds" err="pods \"cilium-xpcds\" is forbidden: User \"system:node:ci-4284-0-0-n-62b177a255.novalocal\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4284-0-0-n-62b177a255.novalocal' and this object" May 13 03:44:44.672428 kubelet[2718]: W0513 03:44:44.671226 2718 reflector.go:569] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ci-4284-0-0-n-62b177a255.novalocal" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4284-0-0-n-62b177a255.novalocal' and this object May 13 03:44:44.672428 kubelet[2718]: E0513 03:44:44.671327 2718 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"hubble-server-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"hubble-server-certs\" is forbidden: User \"system:node:ci-4284-0-0-n-62b177a255.novalocal\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4284-0-0-n-62b177a255.novalocal' and 
this object" logger="UnhandledError" May 13 03:44:44.672428 kubelet[2718]: W0513 03:44:44.671392 2718 reflector.go:569] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ci-4284-0-0-n-62b177a255.novalocal" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4284-0-0-n-62b177a255.novalocal' and this object May 13 03:44:44.672428 kubelet[2718]: E0513 03:44:44.671435 2718 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"cilium-clustermesh\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-clustermesh\" is forbidden: User \"system:node:ci-4284-0-0-n-62b177a255.novalocal\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4284-0-0-n-62b177a255.novalocal' and this object" logger="UnhandledError" May 13 03:44:44.672898 kubelet[2718]: W0513 03:44:44.671518 2718 reflector.go:569] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ci-4284-0-0-n-62b177a255.novalocal" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4284-0-0-n-62b177a255.novalocal' and this object May 13 03:44:44.672898 kubelet[2718]: E0513 03:44:44.671543 2718 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"cilium-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"cilium-config\" is forbidden: User \"system:node:ci-4284-0-0-n-62b177a255.novalocal\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4284-0-0-n-62b177a255.novalocal' and this object" logger="UnhandledError" May 13 03:44:44.672898 kubelet[2718]: W0513 03:44:44.672320 2718 reflector.go:569] 
object-"kube-system"/"cilium-ipsec-keys": failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:ci-4284-0-0-n-62b177a255.novalocal" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4284-0-0-n-62b177a255.novalocal' and this object May 13 03:44:44.672898 kubelet[2718]: E0513 03:44:44.672377 2718 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"cilium-ipsec-keys\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-ipsec-keys\" is forbidden: User \"system:node:ci-4284-0-0-n-62b177a255.novalocal\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4284-0-0-n-62b177a255.novalocal' and this object" logger="UnhandledError" May 13 03:44:44.682488 systemd[1]: Created slice kubepods-burstable-pod8da44cb0_8c82_4730_a439_555dcec27c95.slice - libcontainer container kubepods-burstable-pod8da44cb0_8c82_4730_a439_555dcec27c95.slice. May 13 03:44:44.739377 sshd[4397]: Connection closed by 172.24.4.1 port 40704 May 13 03:44:44.740285 sshd-session[4394]: pam_unix(sshd:session): session closed for user core May 13 03:44:44.751825 systemd[1]: sshd@24-172.24.4.174:22-172.24.4.1:40704.service: Deactivated successfully. May 13 03:44:44.753948 systemd[1]: session-27.scope: Deactivated successfully. May 13 03:44:44.754188 systemd[1]: session-27.scope: Consumed 1.342s CPU time, 23.9M memory peak. May 13 03:44:44.756324 systemd-logind[1468]: Session 27 logged out. Waiting for processes to exit. May 13 03:44:44.758635 systemd[1]: Started sshd@25-172.24.4.174:22-172.24.4.1:59794.service - OpenSSH per-connection server daemon (172.24.4.1:59794). May 13 03:44:44.760343 systemd-logind[1468]: Removed session 27. 
May 13 03:44:44.767734 kubelet[2718]: I0513 03:44:44.767673 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/8da44cb0-8c82-4730-a439-555dcec27c95-cilium-ipsec-secrets\") pod \"cilium-xpcds\" (UID: \"8da44cb0-8c82-4730-a439-555dcec27c95\") " pod="kube-system/cilium-xpcds" May 13 03:44:44.767734 kubelet[2718]: I0513 03:44:44.767735 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8da44cb0-8c82-4730-a439-555dcec27c95-lib-modules\") pod \"cilium-xpcds\" (UID: \"8da44cb0-8c82-4730-a439-555dcec27c95\") " pod="kube-system/cilium-xpcds" May 13 03:44:44.767868 kubelet[2718]: I0513 03:44:44.767761 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8da44cb0-8c82-4730-a439-555dcec27c95-clustermesh-secrets\") pod \"cilium-xpcds\" (UID: \"8da44cb0-8c82-4730-a439-555dcec27c95\") " pod="kube-system/cilium-xpcds" May 13 03:44:44.767868 kubelet[2718]: I0513 03:44:44.767787 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8da44cb0-8c82-4730-a439-555dcec27c95-host-proc-sys-kernel\") pod \"cilium-xpcds\" (UID: \"8da44cb0-8c82-4730-a439-555dcec27c95\") " pod="kube-system/cilium-xpcds" May 13 03:44:44.767868 kubelet[2718]: I0513 03:44:44.767816 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8da44cb0-8c82-4730-a439-555dcec27c95-cni-path\") pod \"cilium-xpcds\" (UID: \"8da44cb0-8c82-4730-a439-555dcec27c95\") " pod="kube-system/cilium-xpcds" May 13 03:44:44.767868 kubelet[2718]: I0513 03:44:44.767842 2718 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8da44cb0-8c82-4730-a439-555dcec27c95-hubble-tls\") pod \"cilium-xpcds\" (UID: \"8da44cb0-8c82-4730-a439-555dcec27c95\") " pod="kube-system/cilium-xpcds" May 13 03:44:44.768034 kubelet[2718]: I0513 03:44:44.767866 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8wzc\" (UniqueName: \"kubernetes.io/projected/8da44cb0-8c82-4730-a439-555dcec27c95-kube-api-access-v8wzc\") pod \"cilium-xpcds\" (UID: \"8da44cb0-8c82-4730-a439-555dcec27c95\") " pod="kube-system/cilium-xpcds" May 13 03:44:44.768034 kubelet[2718]: I0513 03:44:44.767903 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8da44cb0-8c82-4730-a439-555dcec27c95-cilium-run\") pod \"cilium-xpcds\" (UID: \"8da44cb0-8c82-4730-a439-555dcec27c95\") " pod="kube-system/cilium-xpcds" May 13 03:44:44.768034 kubelet[2718]: I0513 03:44:44.767932 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8da44cb0-8c82-4730-a439-555dcec27c95-etc-cni-netd\") pod \"cilium-xpcds\" (UID: \"8da44cb0-8c82-4730-a439-555dcec27c95\") " pod="kube-system/cilium-xpcds" May 13 03:44:44.768034 kubelet[2718]: I0513 03:44:44.767959 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8da44cb0-8c82-4730-a439-555dcec27c95-cilium-config-path\") pod \"cilium-xpcds\" (UID: \"8da44cb0-8c82-4730-a439-555dcec27c95\") " pod="kube-system/cilium-xpcds" May 13 03:44:44.768034 kubelet[2718]: I0513 03:44:44.767985 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: 
\"kubernetes.io/host-path/8da44cb0-8c82-4730-a439-555dcec27c95-bpf-maps\") pod \"cilium-xpcds\" (UID: \"8da44cb0-8c82-4730-a439-555dcec27c95\") " pod="kube-system/cilium-xpcds" May 13 03:44:44.768171 kubelet[2718]: I0513 03:44:44.768032 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8da44cb0-8c82-4730-a439-555dcec27c95-host-proc-sys-net\") pod \"cilium-xpcds\" (UID: \"8da44cb0-8c82-4730-a439-555dcec27c95\") " pod="kube-system/cilium-xpcds" May 13 03:44:44.768171 kubelet[2718]: I0513 03:44:44.768079 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8da44cb0-8c82-4730-a439-555dcec27c95-hostproc\") pod \"cilium-xpcds\" (UID: \"8da44cb0-8c82-4730-a439-555dcec27c95\") " pod="kube-system/cilium-xpcds" May 13 03:44:44.768171 kubelet[2718]: I0513 03:44:44.768117 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8da44cb0-8c82-4730-a439-555dcec27c95-cilium-cgroup\") pod \"cilium-xpcds\" (UID: \"8da44cb0-8c82-4730-a439-555dcec27c95\") " pod="kube-system/cilium-xpcds" May 13 03:44:44.768171 kubelet[2718]: I0513 03:44:44.768138 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8da44cb0-8c82-4730-a439-555dcec27c95-xtables-lock\") pod \"cilium-xpcds\" (UID: \"8da44cb0-8c82-4730-a439-555dcec27c95\") " pod="kube-system/cilium-xpcds" May 13 03:44:45.872366 kubelet[2718]: E0513 03:44:45.872112 2718 secret.go:189] Couldn't get secret kube-system/cilium-clustermesh: failed to sync secret cache: timed out waiting for the condition May 13 03:44:45.874948 kubelet[2718]: E0513 03:44:45.872588 2718 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/8da44cb0-8c82-4730-a439-555dcec27c95-clustermesh-secrets podName:8da44cb0-8c82-4730-a439-555dcec27c95 nodeName:}" failed. No retries permitted until 2025-05-13 03:44:46.37240049 +0000 UTC m=+312.626020146 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "clustermesh-secrets" (UniqueName: "kubernetes.io/secret/8da44cb0-8c82-4730-a439-555dcec27c95-clustermesh-secrets") pod "cilium-xpcds" (UID: "8da44cb0-8c82-4730-a439-555dcec27c95") : failed to sync secret cache: timed out waiting for the condition May 13 03:44:45.874948 kubelet[2718]: E0513 03:44:45.873437 2718 secret.go:189] Couldn't get secret kube-system/cilium-ipsec-keys: failed to sync secret cache: timed out waiting for the condition May 13 03:44:45.874948 kubelet[2718]: E0513 03:44:45.873553 2718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8da44cb0-8c82-4730-a439-555dcec27c95-cilium-ipsec-secrets podName:8da44cb0-8c82-4730-a439-555dcec27c95 nodeName:}" failed. No retries permitted until 2025-05-13 03:44:46.37352163 +0000 UTC m=+312.627141296 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-ipsec-secrets" (UniqueName: "kubernetes.io/secret/8da44cb0-8c82-4730-a439-555dcec27c95-cilium-ipsec-secrets") pod "cilium-xpcds" (UID: "8da44cb0-8c82-4730-a439-555dcec27c95") : failed to sync secret cache: timed out waiting for the condition May 13 03:44:45.876334 kubelet[2718]: E0513 03:44:45.875417 2718 configmap.go:193] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition May 13 03:44:45.876334 kubelet[2718]: E0513 03:44:45.875553 2718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8da44cb0-8c82-4730-a439-555dcec27c95-cilium-config-path podName:8da44cb0-8c82-4730-a439-555dcec27c95 nodeName:}" failed. 
No retries permitted until 2025-05-13 03:44:46.375518072 +0000 UTC m=+312.629137728 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/8da44cb0-8c82-4730-a439-555dcec27c95-cilium-config-path") pod "cilium-xpcds" (UID: "8da44cb0-8c82-4730-a439-555dcec27c95") : failed to sync configmap cache: timed out waiting for the condition May 13 03:44:46.165556 sshd[4407]: Accepted publickey for core from 172.24.4.1 port 59794 ssh2: RSA SHA256:opboDc8cTXVJHtjjKc0iUyBIh5veaBDSbNiIl+xLW2w May 13 03:44:46.172047 sshd-session[4407]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 03:44:46.192472 systemd-logind[1468]: New session 28 of user core. May 13 03:44:46.204389 systemd[1]: Started session-28.scope - Session 28 of User core. May 13 03:44:46.499429 containerd[1485]: time="2025-05-13T03:44:46.495636965Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xpcds,Uid:8da44cb0-8c82-4730-a439-555dcec27c95,Namespace:kube-system,Attempt:0,}" May 13 03:44:46.572180 containerd[1485]: time="2025-05-13T03:44:46.571020818Z" level=info msg="connecting to shim 1f06422287d255c045ea99ed24c1a490bda764825eb34f5b528ac36b38d6d7a0" address="unix:///run/containerd/s/40336659e0869ec017bbcc3f5a774c717fc6c33adbfb87e314664c1ce736b012" namespace=k8s.io protocol=ttrpc version=3 May 13 03:44:46.621404 systemd[1]: Started cri-containerd-1f06422287d255c045ea99ed24c1a490bda764825eb34f5b528ac36b38d6d7a0.scope - libcontainer container 1f06422287d255c045ea99ed24c1a490bda764825eb34f5b528ac36b38d6d7a0. 
May 13 03:44:46.652433 containerd[1485]: time="2025-05-13T03:44:46.652214010Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xpcds,Uid:8da44cb0-8c82-4730-a439-555dcec27c95,Namespace:kube-system,Attempt:0,} returns sandbox id \"1f06422287d255c045ea99ed24c1a490bda764825eb34f5b528ac36b38d6d7a0\"" May 13 03:44:46.657399 containerd[1485]: time="2025-05-13T03:44:46.657359475Z" level=info msg="CreateContainer within sandbox \"1f06422287d255c045ea99ed24c1a490bda764825eb34f5b528ac36b38d6d7a0\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 13 03:44:46.669828 containerd[1485]: time="2025-05-13T03:44:46.668842730Z" level=info msg="Container 801ac4e16ba1c47b0af5987dec4abb5296cef86ee03667814e51ae6373a8b02a: CDI devices from CRI Config.CDIDevices: []" May 13 03:44:46.682321 containerd[1485]: time="2025-05-13T03:44:46.682281501Z" level=info msg="CreateContainer within sandbox \"1f06422287d255c045ea99ed24c1a490bda764825eb34f5b528ac36b38d6d7a0\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"801ac4e16ba1c47b0af5987dec4abb5296cef86ee03667814e51ae6373a8b02a\"" May 13 03:44:46.684353 containerd[1485]: time="2025-05-13T03:44:46.684324452Z" level=info msg="StartContainer for \"801ac4e16ba1c47b0af5987dec4abb5296cef86ee03667814e51ae6373a8b02a\"" May 13 03:44:46.685996 containerd[1485]: time="2025-05-13T03:44:46.685962795Z" level=info msg="connecting to shim 801ac4e16ba1c47b0af5987dec4abb5296cef86ee03667814e51ae6373a8b02a" address="unix:///run/containerd/s/40336659e0869ec017bbcc3f5a774c717fc6c33adbfb87e314664c1ce736b012" protocol=ttrpc version=3 May 13 03:44:46.710444 systemd[1]: Started cri-containerd-801ac4e16ba1c47b0af5987dec4abb5296cef86ee03667814e51ae6373a8b02a.scope - libcontainer container 801ac4e16ba1c47b0af5987dec4abb5296cef86ee03667814e51ae6373a8b02a. 
May 13 03:44:46.749829 containerd[1485]: time="2025-05-13T03:44:46.749631683Z" level=info msg="StartContainer for \"801ac4e16ba1c47b0af5987dec4abb5296cef86ee03667814e51ae6373a8b02a\" returns successfully" May 13 03:44:46.767575 systemd[1]: cri-containerd-801ac4e16ba1c47b0af5987dec4abb5296cef86ee03667814e51ae6373a8b02a.scope: Deactivated successfully. May 13 03:44:46.769757 containerd[1485]: time="2025-05-13T03:44:46.769419595Z" level=info msg="received exit event container_id:\"801ac4e16ba1c47b0af5987dec4abb5296cef86ee03667814e51ae6373a8b02a\" id:\"801ac4e16ba1c47b0af5987dec4abb5296cef86ee03667814e51ae6373a8b02a\" pid:4473 exited_at:{seconds:1747107886 nanos:768942960}" May 13 03:44:46.769757 containerd[1485]: time="2025-05-13T03:44:46.769706090Z" level=info msg="TaskExit event in podsandbox handler container_id:\"801ac4e16ba1c47b0af5987dec4abb5296cef86ee03667814e51ae6373a8b02a\" id:\"801ac4e16ba1c47b0af5987dec4abb5296cef86ee03667814e51ae6373a8b02a\" pid:4473 exited_at:{seconds:1747107886 nanos:768942960}" May 13 03:44:46.784127 sshd[4412]: Connection closed by 172.24.4.1 port 59794 May 13 03:44:46.785419 sshd-session[4407]: pam_unix(sshd:session): session closed for user core May 13 03:44:46.805507 systemd[1]: Started sshd@26-172.24.4.174:22-172.24.4.1:59806.service - OpenSSH per-connection server daemon (172.24.4.1:59806). May 13 03:44:46.808466 systemd[1]: sshd@25-172.24.4.174:22-172.24.4.1:59794.service: Deactivated successfully. May 13 03:44:46.814393 systemd[1]: session-28.scope: Deactivated successfully. May 13 03:44:46.822010 systemd-logind[1468]: Session 28 logged out. Waiting for processes to exit. May 13 03:44:46.826610 systemd-logind[1468]: Removed session 28. 
May 13 03:44:46.992660 containerd[1485]: time="2025-05-13T03:44:46.992346340Z" level=warning msg="container event discarded" container=be58dfa33e66d34753e31022089af105a1f9b73aeb46c2b51e040bcac8c9cd1e type=CONTAINER_CREATED_EVENT May 13 03:44:47.053037 containerd[1485]: time="2025-05-13T03:44:47.052946261Z" level=warning msg="container event discarded" container=be58dfa33e66d34753e31022089af105a1f9b73aeb46c2b51e040bcac8c9cd1e type=CONTAINER_STARTED_EVENT May 13 03:44:47.561861 containerd[1485]: time="2025-05-13T03:44:47.561757626Z" level=info msg="CreateContainer within sandbox \"1f06422287d255c045ea99ed24c1a490bda764825eb34f5b528ac36b38d6d7a0\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 13 03:44:47.588367 containerd[1485]: time="2025-05-13T03:44:47.588155102Z" level=info msg="Container d8456af09755c3872a694419547ce38df888d4dd1d1721ec1f7445cb93250154: CDI devices from CRI Config.CDIDevices: []" May 13 03:44:47.602885 containerd[1485]: time="2025-05-13T03:44:47.602843070Z" level=info msg="CreateContainer within sandbox \"1f06422287d255c045ea99ed24c1a490bda764825eb34f5b528ac36b38d6d7a0\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"d8456af09755c3872a694419547ce38df888d4dd1d1721ec1f7445cb93250154\"" May 13 03:44:47.606007 containerd[1485]: time="2025-05-13T03:44:47.604446456Z" level=info msg="StartContainer for \"d8456af09755c3872a694419547ce38df888d4dd1d1721ec1f7445cb93250154\"" May 13 03:44:47.606007 containerd[1485]: time="2025-05-13T03:44:47.605499588Z" level=info msg="connecting to shim d8456af09755c3872a694419547ce38df888d4dd1d1721ec1f7445cb93250154" address="unix:///run/containerd/s/40336659e0869ec017bbcc3f5a774c717fc6c33adbfb87e314664c1ce736b012" protocol=ttrpc version=3 May 13 03:44:47.639423 systemd[1]: Started cri-containerd-d8456af09755c3872a694419547ce38df888d4dd1d1721ec1f7445cb93250154.scope - libcontainer container 
d8456af09755c3872a694419547ce38df888d4dd1d1721ec1f7445cb93250154. May 13 03:44:47.678092 containerd[1485]: time="2025-05-13T03:44:47.678056180Z" level=info msg="StartContainer for \"d8456af09755c3872a694419547ce38df888d4dd1d1721ec1f7445cb93250154\" returns successfully" May 13 03:44:47.690493 systemd[1]: cri-containerd-d8456af09755c3872a694419547ce38df888d4dd1d1721ec1f7445cb93250154.scope: Deactivated successfully. May 13 03:44:47.692459 containerd[1485]: time="2025-05-13T03:44:47.692418118Z" level=info msg="received exit event container_id:\"d8456af09755c3872a694419547ce38df888d4dd1d1721ec1f7445cb93250154\" id:\"d8456af09755c3872a694419547ce38df888d4dd1d1721ec1f7445cb93250154\" pid:4521 exited_at:{seconds:1747107887 nanos:691448876}" May 13 03:44:47.693043 containerd[1485]: time="2025-05-13T03:44:47.693003310Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d8456af09755c3872a694419547ce38df888d4dd1d1721ec1f7445cb93250154\" id:\"d8456af09755c3872a694419547ce38df888d4dd1d1721ec1f7445cb93250154\" pid:4521 exited_at:{seconds:1747107887 nanos:691448876}" May 13 03:44:47.717809 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d8456af09755c3872a694419547ce38df888d4dd1d1721ec1f7445cb93250154-rootfs.mount: Deactivated successfully. May 13 03:44:48.039430 sshd[4504]: Accepted publickey for core from 172.24.4.1 port 59806 ssh2: RSA SHA256:opboDc8cTXVJHtjjKc0iUyBIh5veaBDSbNiIl+xLW2w May 13 03:44:48.044875 sshd-session[4504]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 03:44:48.065348 systemd-logind[1468]: New session 29 of user core. May 13 03:44:48.075603 systemd[1]: Started session-29.scope - Session 29 of User core. 
May 13 03:44:48.344644 containerd[1485]: time="2025-05-13T03:44:48.344481310Z" level=warning msg="container event discarded" container=be58dfa33e66d34753e31022089af105a1f9b73aeb46c2b51e040bcac8c9cd1e type=CONTAINER_STOPPED_EVENT May 13 03:44:48.550357 containerd[1485]: time="2025-05-13T03:44:48.549149757Z" level=info msg="CreateContainer within sandbox \"1f06422287d255c045ea99ed24c1a490bda764825eb34f5b528ac36b38d6d7a0\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 13 03:44:48.565325 containerd[1485]: time="2025-05-13T03:44:48.565249031Z" level=info msg="Container 9fd56dd10cacf4673f78fc5ef1796debe947fa8f09990484b766e015710317d9: CDI devices from CRI Config.CDIDevices: []" May 13 03:44:48.583026 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3069839356.mount: Deactivated successfully. May 13 03:44:48.590814 containerd[1485]: time="2025-05-13T03:44:48.590598243Z" level=info msg="CreateContainer within sandbox \"1f06422287d255c045ea99ed24c1a490bda764825eb34f5b528ac36b38d6d7a0\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"9fd56dd10cacf4673f78fc5ef1796debe947fa8f09990484b766e015710317d9\"" May 13 03:44:48.594907 containerd[1485]: time="2025-05-13T03:44:48.593000998Z" level=info msg="StartContainer for \"9fd56dd10cacf4673f78fc5ef1796debe947fa8f09990484b766e015710317d9\"" May 13 03:44:48.596375 containerd[1485]: time="2025-05-13T03:44:48.595722701Z" level=info msg="connecting to shim 9fd56dd10cacf4673f78fc5ef1796debe947fa8f09990484b766e015710317d9" address="unix:///run/containerd/s/40336659e0869ec017bbcc3f5a774c717fc6c33adbfb87e314664c1ce736b012" protocol=ttrpc version=3 May 13 03:44:48.642452 systemd[1]: Started cri-containerd-9fd56dd10cacf4673f78fc5ef1796debe947fa8f09990484b766e015710317d9.scope - libcontainer container 9fd56dd10cacf4673f78fc5ef1796debe947fa8f09990484b766e015710317d9. 
May 13 03:44:48.705679 systemd[1]: cri-containerd-9fd56dd10cacf4673f78fc5ef1796debe947fa8f09990484b766e015710317d9.scope: Deactivated successfully. May 13 03:44:48.709263 containerd[1485]: time="2025-05-13T03:44:48.708950550Z" level=info msg="received exit event container_id:\"9fd56dd10cacf4673f78fc5ef1796debe947fa8f09990484b766e015710317d9\" id:\"9fd56dd10cacf4673f78fc5ef1796debe947fa8f09990484b766e015710317d9\" pid:4572 exited_at:{seconds:1747107888 nanos:708415594}" May 13 03:44:48.709391 containerd[1485]: time="2025-05-13T03:44:48.709275980Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9fd56dd10cacf4673f78fc5ef1796debe947fa8f09990484b766e015710317d9\" id:\"9fd56dd10cacf4673f78fc5ef1796debe947fa8f09990484b766e015710317d9\" pid:4572 exited_at:{seconds:1747107888 nanos:708415594}" May 13 03:44:48.710574 containerd[1485]: time="2025-05-13T03:44:48.710467784Z" level=info msg="StartContainer for \"9fd56dd10cacf4673f78fc5ef1796debe947fa8f09990484b766e015710317d9\" returns successfully" May 13 03:44:48.737706 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9fd56dd10cacf4673f78fc5ef1796debe947fa8f09990484b766e015710317d9-rootfs.mount: Deactivated successfully. 
May 13 03:44:49.166339 containerd[1485]: time="2025-05-13T03:44:49.166124893Z" level=warning msg="container event discarded" container=2a246280573911fae46b4d73d2c8655cba01db2e2be5fe5c62c4869f705d5439 type=CONTAINER_CREATED_EVENT May 13 03:44:49.241189 kubelet[2718]: E0513 03:44:49.241074 2718 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 13 03:44:49.255489 containerd[1485]: time="2025-05-13T03:44:49.255340736Z" level=warning msg="container event discarded" container=2a246280573911fae46b4d73d2c8655cba01db2e2be5fe5c62c4869f705d5439 type=CONTAINER_STARTED_EVENT May 13 03:44:49.328920 containerd[1485]: time="2025-05-13T03:44:49.328784402Z" level=warning msg="container event discarded" container=2a246280573911fae46b4d73d2c8655cba01db2e2be5fe5c62c4869f705d5439 type=CONTAINER_STOPPED_EVENT May 13 03:44:49.570352 containerd[1485]: time="2025-05-13T03:44:49.568332453Z" level=info msg="CreateContainer within sandbox \"1f06422287d255c045ea99ed24c1a490bda764825eb34f5b528ac36b38d6d7a0\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 13 03:44:49.594324 containerd[1485]: time="2025-05-13T03:44:49.594115106Z" level=info msg="Container 29a178a42ff5778832b61eab6e9e4a50165f3273c9704ef61a2bcef51374b6ec: CDI devices from CRI Config.CDIDevices: []" May 13 03:44:49.613324 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2507657434.mount: Deactivated successfully. 
May 13 03:44:49.623819 containerd[1485]: time="2025-05-13T03:44:49.620784975Z" level=info msg="CreateContainer within sandbox \"1f06422287d255c045ea99ed24c1a490bda764825eb34f5b528ac36b38d6d7a0\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"29a178a42ff5778832b61eab6e9e4a50165f3273c9704ef61a2bcef51374b6ec\"" May 13 03:44:49.623819 containerd[1485]: time="2025-05-13T03:44:49.622083452Z" level=info msg="StartContainer for \"29a178a42ff5778832b61eab6e9e4a50165f3273c9704ef61a2bcef51374b6ec\"" May 13 03:44:49.628841 containerd[1485]: time="2025-05-13T03:44:49.628761544Z" level=info msg="connecting to shim 29a178a42ff5778832b61eab6e9e4a50165f3273c9704ef61a2bcef51374b6ec" address="unix:///run/containerd/s/40336659e0869ec017bbcc3f5a774c717fc6c33adbfb87e314664c1ce736b012" protocol=ttrpc version=3 May 13 03:44:49.652405 systemd[1]: Started cri-containerd-29a178a42ff5778832b61eab6e9e4a50165f3273c9704ef61a2bcef51374b6ec.scope - libcontainer container 29a178a42ff5778832b61eab6e9e4a50165f3273c9704ef61a2bcef51374b6ec. May 13 03:44:49.686575 systemd[1]: cri-containerd-29a178a42ff5778832b61eab6e9e4a50165f3273c9704ef61a2bcef51374b6ec.scope: Deactivated successfully. 
May 13 03:44:49.689302 containerd[1485]: time="2025-05-13T03:44:49.688811836Z" level=info msg="TaskExit event in podsandbox handler container_id:\"29a178a42ff5778832b61eab6e9e4a50165f3273c9704ef61a2bcef51374b6ec\" id:\"29a178a42ff5778832b61eab6e9e4a50165f3273c9704ef61a2bcef51374b6ec\" pid:4609 exited_at:{seconds:1747107889 nanos:686985715}" May 13 03:44:49.691607 containerd[1485]: time="2025-05-13T03:44:49.691576981Z" level=info msg="received exit event container_id:\"29a178a42ff5778832b61eab6e9e4a50165f3273c9704ef61a2bcef51374b6ec\" id:\"29a178a42ff5778832b61eab6e9e4a50165f3273c9704ef61a2bcef51374b6ec\" pid:4609 exited_at:{seconds:1747107889 nanos:686985715}" May 13 03:44:49.700073 containerd[1485]: time="2025-05-13T03:44:49.700040076Z" level=info msg="StartContainer for \"29a178a42ff5778832b61eab6e9e4a50165f3273c9704ef61a2bcef51374b6ec\" returns successfully" May 13 03:44:49.717963 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-29a178a42ff5778832b61eab6e9e4a50165f3273c9704ef61a2bcef51374b6ec-rootfs.mount: Deactivated successfully. 
May 13 03:44:49.731342 kubelet[2718]: I0513 03:44:49.731217 2718 setters.go:602] "Node became not ready" node="ci-4284-0-0-n-62b177a255.novalocal" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-13T03:44:49Z","lastTransitionTime":"2025-05-13T03:44:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} May 13 03:44:50.168310 containerd[1485]: time="2025-05-13T03:44:50.168084357Z" level=warning msg="container event discarded" container=73282d56a72f24905fd871beec5518e2ab7d697c572bef1716ac6586e9b6eddd type=CONTAINER_CREATED_EVENT May 13 03:44:50.254759 containerd[1485]: time="2025-05-13T03:44:50.254579941Z" level=warning msg="container event discarded" container=73282d56a72f24905fd871beec5518e2ab7d697c572bef1716ac6586e9b6eddd type=CONTAINER_STARTED_EVENT May 13 03:44:50.485832 containerd[1485]: time="2025-05-13T03:44:50.485522816Z" level=warning msg="container event discarded" container=73282d56a72f24905fd871beec5518e2ab7d697c572bef1716ac6586e9b6eddd type=CONTAINER_STOPPED_EVENT May 13 03:44:50.578185 containerd[1485]: time="2025-05-13T03:44:50.578089812Z" level=info msg="CreateContainer within sandbox \"1f06422287d255c045ea99ed24c1a490bda764825eb34f5b528ac36b38d6d7a0\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 13 03:44:50.612372 containerd[1485]: time="2025-05-13T03:44:50.609497102Z" level=info msg="Container 780b145f962fd37940910b8f92776c568051a22c07bbb32c09d5233d558c2b9d: CDI devices from CRI Config.CDIDevices: []" May 13 03:44:50.639558 containerd[1485]: time="2025-05-13T03:44:50.639512246Z" level=info msg="CreateContainer within sandbox \"1f06422287d255c045ea99ed24c1a490bda764825eb34f5b528ac36b38d6d7a0\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"780b145f962fd37940910b8f92776c568051a22c07bbb32c09d5233d558c2b9d\"" May 13 03:44:50.640480 
containerd[1485]: time="2025-05-13T03:44:50.640437876Z" level=info msg="StartContainer for \"780b145f962fd37940910b8f92776c568051a22c07bbb32c09d5233d558c2b9d\"" May 13 03:44:50.642785 containerd[1485]: time="2025-05-13T03:44:50.642733970Z" level=info msg="connecting to shim 780b145f962fd37940910b8f92776c568051a22c07bbb32c09d5233d558c2b9d" address="unix:///run/containerd/s/40336659e0869ec017bbcc3f5a774c717fc6c33adbfb87e314664c1ce736b012" protocol=ttrpc version=3 May 13 03:44:50.676460 systemd[1]: Started cri-containerd-780b145f962fd37940910b8f92776c568051a22c07bbb32c09d5233d558c2b9d.scope - libcontainer container 780b145f962fd37940910b8f92776c568051a22c07bbb32c09d5233d558c2b9d. May 13 03:44:50.748392 containerd[1485]: time="2025-05-13T03:44:50.747638979Z" level=info msg="StartContainer for \"780b145f962fd37940910b8f92776c568051a22c07bbb32c09d5233d558c2b9d\" returns successfully" May 13 03:44:50.764017 containerd[1485]: time="2025-05-13T03:44:50.763931017Z" level=warning msg="container event discarded" container=66ce90e7f0224ec9ab11f04b8ffb3d29e0ba79ac94c963d4af046aca8de765c7 type=CONTAINER_CREATED_EVENT May 13 03:44:50.831911 containerd[1485]: time="2025-05-13T03:44:50.831805165Z" level=warning msg="container event discarded" container=66ce90e7f0224ec9ab11f04b8ffb3d29e0ba79ac94c963d4af046aca8de765c7 type=CONTAINER_STARTED_EVENT May 13 03:44:50.859529 containerd[1485]: time="2025-05-13T03:44:50.859476534Z" level=info msg="TaskExit event in podsandbox handler container_id:\"780b145f962fd37940910b8f92776c568051a22c07bbb32c09d5233d558c2b9d\" id:\"a56d602b11641c8965c21abc674c4e2662a10710cad408c4aa7cda0cf7770eb1\" pid:4677 exited_at:{seconds:1747107890 nanos:858030926}" May 13 03:44:51.162425 containerd[1485]: time="2025-05-13T03:44:51.162363706Z" level=warning msg="container event discarded" container=0bd994327bd371eb04fb58ebcb4aab99642e74da74b4b7e8a93377381e71e8e0 type=CONTAINER_CREATED_EVENT May 13 03:44:51.205289 kernel: cryptd: max_cpu_qlen set to 1000 May 13 
03:44:51.258269 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm_base(ctr(aes-generic),ghash-generic))))
May 13 03:44:51.266614 containerd[1485]: time="2025-05-13T03:44:51.266540415Z" level=warning msg="container event discarded" container=0bd994327bd371eb04fb58ebcb4aab99642e74da74b4b7e8a93377381e71e8e0 type=CONTAINER_STARTED_EVENT
May 13 03:44:51.406724 containerd[1485]: time="2025-05-13T03:44:51.406647552Z" level=warning msg="container event discarded" container=0bd994327bd371eb04fb58ebcb4aab99642e74da74b4b7e8a93377381e71e8e0 type=CONTAINER_STOPPED_EVENT
May 13 03:44:51.640865 kubelet[2718]: I0513 03:44:51.640687 2718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-xpcds" podStartSLOduration=7.640103324 podStartE2EDuration="7.640103324s" podCreationTimestamp="2025-05-13 03:44:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 03:44:51.63661364 +0000 UTC m=+317.890233306" watchObservedRunningTime="2025-05-13 03:44:51.640103324 +0000 UTC m=+317.893722981"
May 13 03:44:52.212281 containerd[1485]: time="2025-05-13T03:44:52.212087793Z" level=warning msg="container event discarded" container=efc23b252c4d9acfe79c4b113c20871fddcc11dbdd6b80ffefa0e618c7c43857 type=CONTAINER_CREATED_EVENT
May 13 03:44:52.290783 containerd[1485]: time="2025-05-13T03:44:52.290650948Z" level=warning msg="container event discarded" container=efc23b252c4d9acfe79c4b113c20871fddcc11dbdd6b80ffefa0e618c7c43857 type=CONTAINER_STARTED_EVENT
May 13 03:44:53.110760 containerd[1485]: time="2025-05-13T03:44:53.110544603Z" level=info msg="TaskExit event in podsandbox handler container_id:\"780b145f962fd37940910b8f92776c568051a22c07bbb32c09d5233d558c2b9d\" id:\"014669e585773b005200745d3fef7a3f91ffb09d75c93bece3bac668f70984b0\" pid:4876 exit_status:1 exited_at:{seconds:1747107893 nanos:109754651}"
May 13 03:44:54.846517 systemd-networkd[1381]: lxc_health: Link UP
May 13 03:44:54.857580 systemd-networkd[1381]: lxc_health: Gained carrier
May 13 03:44:55.286806 containerd[1485]: time="2025-05-13T03:44:55.286685983Z" level=info msg="TaskExit event in podsandbox handler container_id:\"780b145f962fd37940910b8f92776c568051a22c07bbb32c09d5233d558c2b9d\" id:\"eb26a4e26661ee0222f7063c0fe1c67e026688cfd5363feedcb825ccd3d5729f\" pid:5247 exited_at:{seconds:1747107895 nanos:286344343}"
May 13 03:44:56.829540 systemd-networkd[1381]: lxc_health: Gained IPv6LL
May 13 03:44:57.492217 containerd[1485]: time="2025-05-13T03:44:57.492163392Z" level=info msg="TaskExit event in podsandbox handler container_id:\"780b145f962fd37940910b8f92776c568051a22c07bbb32c09d5233d558c2b9d\" id:\"d42753e5b73ccabdca95cb15832ca320c81c79e1158d2b000206464055c13983\" pid:5284 exited_at:{seconds:1747107897 nanos:491700281}"
May 13 03:44:59.721351 containerd[1485]: time="2025-05-13T03:44:59.720853322Z" level=info msg="TaskExit event in podsandbox handler container_id:\"780b145f962fd37940910b8f92776c568051a22c07bbb32c09d5233d558c2b9d\" id:\"403ae327cf8daf7cb1ea518e2330de1c213c188ac5d8abaac3a3f5977a81bcf8\" pid:5314 exited_at:{seconds:1747107899 nanos:720332410}"
May 13 03:45:00.644726 containerd[1485]: time="2025-05-13T03:45:00.644518043Z" level=warning msg="container event discarded" container=31ecdbf2feb1963c3bfdf33f9a7393f704c1ee9b7a113d54250bd23d405b4703 type=CONTAINER_CREATED_EVENT
May 13 03:45:00.644726 containerd[1485]: time="2025-05-13T03:45:00.644633032Z" level=warning msg="container event discarded" container=31ecdbf2feb1963c3bfdf33f9a7393f704c1ee9b7a113d54250bd23d405b4703 type=CONTAINER_STARTED_EVENT
May 13 03:45:00.679126 containerd[1485]: time="2025-05-13T03:45:00.678997774Z" level=warning msg="container event discarded" container=1eafc664ca776727d015364cf79152e12a7e05fd2d76e7450685bde32f91e19d type=CONTAINER_CREATED_EVENT
May 13 03:45:00.732829 containerd[1485]: time="2025-05-13T03:45:00.732623544Z" level=warning msg="container event discarded" container=26fa62e6569492b1671fadab4b874167b627feac724aa59502c3cdab26d95a36 type=CONTAINER_CREATED_EVENT
May 13 03:45:00.732829 containerd[1485]: time="2025-05-13T03:45:00.732753040Z" level=warning msg="container event discarded" container=26fa62e6569492b1671fadab4b874167b627feac724aa59502c3cdab26d95a36 type=CONTAINER_STARTED_EVENT
May 13 03:45:00.768306 containerd[1485]: time="2025-05-13T03:45:00.768141712Z" level=warning msg="container event discarded" container=636a58c68363fcd35606ab5ddd244f82ef105da0162764693ccc603dd46a2edc type=CONTAINER_CREATED_EVENT
May 13 03:45:00.768499 containerd[1485]: time="2025-05-13T03:45:00.768303841Z" level=warning msg="container event discarded" container=1eafc664ca776727d015364cf79152e12a7e05fd2d76e7450685bde32f91e19d type=CONTAINER_STARTED_EVENT
May 13 03:45:00.849752 containerd[1485]: time="2025-05-13T03:45:00.849618298Z" level=warning msg="container event discarded" container=636a58c68363fcd35606ab5ddd244f82ef105da0162764693ccc603dd46a2edc type=CONTAINER_STARTED_EVENT
May 13 03:45:01.951176 containerd[1485]: time="2025-05-13T03:45:01.951125014Z" level=info msg="TaskExit event in podsandbox handler container_id:\"780b145f962fd37940910b8f92776c568051a22c07bbb32c09d5233d558c2b9d\" id:\"7742049dd94f68836d66707e7543d771b301c4a14292ebe8acee5c2045a4697e\" pid:5340 exited_at:{seconds:1747107901 nanos:950595706}"
May 13 03:45:01.954823 kubelet[2718]: E0513 03:45:01.954655 2718 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:47408->127.0.0.1:37743: write tcp 127.0.0.1:47408->127.0.0.1:37743: write: broken pipe
May 13 03:45:02.298312 sshd[4553]: Connection closed by 172.24.4.1 port 59806
May 13 03:45:02.300077 sshd-session[4504]: pam_unix(sshd:session): session closed for user core
May 13 03:45:02.309716 systemd[1]: sshd@26-172.24.4.174:22-172.24.4.1:59806.service: Deactivated successfully.
May 13 03:45:02.315566 systemd[1]: session-29.scope: Deactivated successfully.
May 13 03:45:02.318277 systemd-logind[1468]: Session 29 logged out. Waiting for processes to exit.
May 13 03:45:02.321134 systemd-logind[1468]: Removed session 29.
May 13 03:45:34.051834 containerd[1485]: time="2025-05-13T03:45:34.051130430Z" level=info msg="StopPodSandbox for \"e582aaf94acdbb55a81479ff6ab6f7629cae08cc02c326520d1f004fb7554338\""
May 13 03:45:34.051834 containerd[1485]: time="2025-05-13T03:45:34.051569988Z" level=info msg="TearDown network for sandbox \"e582aaf94acdbb55a81479ff6ab6f7629cae08cc02c326520d1f004fb7554338\" successfully"
May 13 03:45:34.051834 containerd[1485]: time="2025-05-13T03:45:34.051625555Z" level=info msg="StopPodSandbox for \"e582aaf94acdbb55a81479ff6ab6f7629cae08cc02c326520d1f004fb7554338\" returns successfully"
May 13 03:45:34.055617 containerd[1485]: time="2025-05-13T03:45:34.053643667Z" level=info msg="RemovePodSandbox for \"e582aaf94acdbb55a81479ff6ab6f7629cae08cc02c326520d1f004fb7554338\""
May 13 03:45:34.055617 containerd[1485]: time="2025-05-13T03:45:34.053726064Z" level=info msg="Forcibly stopping sandbox \"e582aaf94acdbb55a81479ff6ab6f7629cae08cc02c326520d1f004fb7554338\""
May 13 03:45:34.055617 containerd[1485]: time="2025-05-13T03:45:34.053936686Z" level=info msg="TearDown network for sandbox \"e582aaf94acdbb55a81479ff6ab6f7629cae08cc02c326520d1f004fb7554338\" successfully"
May 13 03:45:34.058692 containerd[1485]: time="2025-05-13T03:45:34.058633342Z" level=info msg="Ensure that sandbox e582aaf94acdbb55a81479ff6ab6f7629cae08cc02c326520d1f004fb7554338 in task-service has been cleanup successfully"
May 13 03:45:34.065481 containerd[1485]: time="2025-05-13T03:45:34.065358289Z" level=info msg="RemovePodSandbox \"e582aaf94acdbb55a81479ff6ab6f7629cae08cc02c326520d1f004fb7554338\" returns successfully"
May 13 03:45:34.067171 containerd[1485]: time="2025-05-13T03:45:34.067072281Z" level=info msg="StopPodSandbox for \"d39e13ef00390611776c6811cc2e77a0d804041695a596ca3e1e15c02254af19\""
May 13 03:45:34.067535 containerd[1485]: time="2025-05-13T03:45:34.067467976Z" level=info msg="TearDown network for sandbox \"d39e13ef00390611776c6811cc2e77a0d804041695a596ca3e1e15c02254af19\" successfully"
May 13 03:45:34.067535 containerd[1485]: time="2025-05-13T03:45:34.067520557Z" level=info msg="StopPodSandbox for \"d39e13ef00390611776c6811cc2e77a0d804041695a596ca3e1e15c02254af19\" returns successfully"
May 13 03:45:34.068491 containerd[1485]: time="2025-05-13T03:45:34.068317628Z" level=info msg="RemovePodSandbox for \"d39e13ef00390611776c6811cc2e77a0d804041695a596ca3e1e15c02254af19\""
May 13 03:45:34.068491 containerd[1485]: time="2025-05-13T03:45:34.068390928Z" level=info msg="Forcibly stopping sandbox \"d39e13ef00390611776c6811cc2e77a0d804041695a596ca3e1e15c02254af19\""
May 13 03:45:34.071800 containerd[1485]: time="2025-05-13T03:45:34.071719753Z" level=info msg="TearDown network for sandbox \"d39e13ef00390611776c6811cc2e77a0d804041695a596ca3e1e15c02254af19\" successfully"
May 13 03:45:34.080605 containerd[1485]: time="2025-05-13T03:45:34.080524840Z" level=info msg="Ensure that sandbox d39e13ef00390611776c6811cc2e77a0d804041695a596ca3e1e15c02254af19 in task-service has been cleanup successfully"
May 13 03:45:34.098046 containerd[1485]: time="2025-05-13T03:45:34.097913413Z" level=info msg="RemovePodSandbox \"d39e13ef00390611776c6811cc2e77a0d804041695a596ca3e1e15c02254af19\" returns successfully"