May 14 00:04:33.074748 kernel: Linux version 6.6.89-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Tue May 13 22:08:35 -00 2025
May 14 00:04:33.074782 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=8b3c5774a4242053287d41edc0d029958b7c22c131f7dd36b16a68182354e130
May 14 00:04:33.074799 kernel: BIOS-provided physical RAM map:
May 14 00:04:33.074810 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
May 14 00:04:33.074820 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
May 14 00:04:33.074830 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
May 14 00:04:33.074843 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007cfdbfff] usable
May 14 00:04:33.074853 kernel: BIOS-e820: [mem 0x000000007cfdc000-0x000000007cffffff] reserved
May 14 00:04:33.074866 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
May 14 00:04:33.074877 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
May 14 00:04:33.074887 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
May 14 00:04:33.074897 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
May 14 00:04:33.074908 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
May 14 00:04:33.074919 kernel: NX (Execute Disable) protection: active
May 14 00:04:33.074932 kernel: APIC: Static calls initialized
May 14 00:04:33.074945 kernel: SMBIOS 3.0.0 present.
May 14 00:04:33.074957 kernel: DMI: Hetzner vServer/Standard PC (Q35 + ICH9, 2009), BIOS 20171111 11/11/2017 May 14 00:04:33.074969 kernel: Hypervisor detected: KVM May 14 00:04:33.074980 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 May 14 00:04:33.074991 kernel: kvm-clock: using sched offset of 3189353250 cycles May 14 00:04:33.075003 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns May 14 00:04:33.075015 kernel: tsc: Detected 2495.310 MHz processor May 14 00:04:33.075055 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved May 14 00:04:33.075067 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable May 14 00:04:33.075082 kernel: last_pfn = 0x7cfdc max_arch_pfn = 0x400000000 May 14 00:04:33.075094 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs May 14 00:04:33.075106 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT May 14 00:04:33.075118 kernel: Using GB pages for direct mapping May 14 00:04:33.075130 kernel: ACPI: Early table checksum verification disabled May 14 00:04:33.075141 kernel: ACPI: RSDP 0x00000000000F5270 000014 (v00 BOCHS ) May 14 00:04:33.075153 kernel: ACPI: RSDT 0x000000007CFE2693 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 14 00:04:33.075165 kernel: ACPI: FACP 0x000000007CFE2483 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) May 14 00:04:33.075177 kernel: ACPI: DSDT 0x000000007CFE0040 002443 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 14 00:04:33.075190 kernel: ACPI: FACS 0x000000007CFE0000 000040 May 14 00:04:33.075202 kernel: ACPI: APIC 0x000000007CFE2577 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) May 14 00:04:33.075214 kernel: ACPI: HPET 0x000000007CFE25F7 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 14 00:04:33.075226 kernel: ACPI: MCFG 0x000000007CFE262F 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) May 14 00:04:33.075237 kernel: ACPI: WAET 0x000000007CFE266B 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 14 00:04:33.075249 kernel: ACPI: Reserving FACP table memory at [mem 0x7cfe2483-0x7cfe2576] May 14 00:04:33.075261 kernel: ACPI: Reserving DSDT table memory at [mem 0x7cfe0040-0x7cfe2482] May 14 00:04:33.075277 kernel: ACPI: Reserving FACS table memory at [mem 0x7cfe0000-0x7cfe003f] May 14 00:04:33.075290 kernel: ACPI: Reserving APIC table memory at [mem 0x7cfe2577-0x7cfe25f6] May 14 00:04:33.075302 kernel: ACPI: Reserving HPET table memory at [mem 0x7cfe25f7-0x7cfe262e] May 14 00:04:33.075314 kernel: ACPI: Reserving MCFG table memory at [mem 0x7cfe262f-0x7cfe266a] May 14 00:04:33.075327 kernel: ACPI: Reserving WAET table memory at [mem 0x7cfe266b-0x7cfe2692] May 14 00:04:33.075338 kernel: No NUMA configuration found May 14 00:04:33.075350 kernel: Faking a node at [mem 0x0000000000000000-0x000000007cfdbfff] May 14 00:04:33.075365 kernel: NODE_DATA(0) allocated [mem 0x7cfd6000-0x7cfdbfff] May 14 00:04:33.075377 kernel: Zone ranges: May 14 00:04:33.075389 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] May 14 00:04:33.075401 kernel: DMA32 [mem 0x0000000001000000-0x000000007cfdbfff] May 14 00:04:33.075413 kernel: Normal empty May 14 00:04:33.075425 kernel: Movable zone start for each node May 14 00:04:33.075437 kernel: Early memory node ranges May 14 00:04:33.075459 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] May 14 00:04:33.075471 kernel: node 0: [mem 0x0000000000100000-0x000000007cfdbfff] May 14 00:04:33.075486 kernel: Initmem setup node 0 [mem 
0x0000000000001000-0x000000007cfdbfff] May 14 00:04:33.075499 kernel: On node 0, zone DMA: 1 pages in unavailable ranges May 14 00:04:33.075510 kernel: On node 0, zone DMA: 97 pages in unavailable ranges May 14 00:04:33.075523 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges May 14 00:04:33.075535 kernel: ACPI: PM-Timer IO Port: 0x608 May 14 00:04:33.075547 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) May 14 00:04:33.075559 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 May 14 00:04:33.075571 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) May 14 00:04:33.075583 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) May 14 00:04:33.075595 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) May 14 00:04:33.075609 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) May 14 00:04:33.075621 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) May 14 00:04:33.075633 kernel: ACPI: Using ACPI (MADT) for SMP configuration information May 14 00:04:33.075646 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 May 14 00:04:33.075658 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs May 14 00:04:33.075670 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() May 14 00:04:33.075682 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices May 14 00:04:33.075694 kernel: Booting paravirtualized kernel on KVM May 14 00:04:33.075706 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns May 14 00:04:33.075721 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 May 14 00:04:33.075733 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576 May 14 00:04:33.075745 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152 May 14 00:04:33.075757 kernel: pcpu-alloc: [0] 0 1 May 14 00:04:33.075768 kernel: kvm-guest: PV spinlocks disabled, no host support May 14 00:04:33.075783 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=8b3c5774a4242053287d41edc0d029958b7c22c131f7dd36b16a68182354e130 May 14 00:04:33.075796 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. May 14 00:04:33.075808 kernel: random: crng init done May 14 00:04:33.075822 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 14 00:04:33.075835 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) May 14 00:04:33.075847 kernel: Fallback order for Node 0: 0 May 14 00:04:33.075859 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 503708 May 14 00:04:33.075871 kernel: Policy zone: DMA32 May 14 00:04:33.075883 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off May 14 00:04:33.075896 kernel: Memory: 1917956K/2047464K available (14336K kernel code, 2296K rwdata, 25068K rodata, 43604K init, 1468K bss, 129248K reserved, 0K cma-reserved) May 14 00:04:33.075908 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 May 14 00:04:33.075920 kernel: ftrace: allocating 37993 entries in 149 pages May 14 00:04:33.075934 kernel: ftrace: allocated 149 pages with 4 groups May 14 00:04:33.075946 kernel: Dynamic Preempt: voluntary May 14 00:04:33.075958 kernel: rcu: Preemptible hierarchical RCU implementation. May 14 00:04:33.075971 kernel: rcu: RCU event tracing is enabled. May 14 00:04:33.075984 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. May 14 00:04:33.075996 kernel: Trampoline variant of Tasks RCU enabled. May 14 00:04:33.076008 kernel: Rude variant of Tasks RCU enabled. May 14 00:04:33.076020 kernel: Tracing variant of Tasks RCU enabled. May 14 00:04:33.076057 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. May 14 00:04:33.076072 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 May 14 00:04:33.076084 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 May 14 00:04:33.076096 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. May 14 00:04:33.076108 kernel: Console: colour VGA+ 80x25 May 14 00:04:33.076120 kernel: printk: console [tty0] enabled May 14 00:04:33.076132 kernel: printk: console [ttyS0] enabled May 14 00:04:33.076144 kernel: ACPI: Core revision 20230628 May 14 00:04:33.076156 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns May 14 00:04:33.076169 kernel: APIC: Switch to symmetric I/O mode setup May 14 00:04:33.076183 kernel: x2apic enabled May 14 00:04:33.076195 kernel: APIC: Switched APIC routing to: physical x2apic May 14 00:04:33.076207 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 May 14 00:04:33.076219 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized May 14 00:04:33.076232 kernel: Calibrating delay loop (skipped) preset value.. 
4990.62 BogoMIPS (lpj=2495310) May 14 00:04:33.076244 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated May 14 00:04:33.076256 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 May 14 00:04:33.076268 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 May 14 00:04:33.076290 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization May 14 00:04:33.076303 kernel: Spectre V2 : Mitigation: Retpolines May 14 00:04:33.076316 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT May 14 00:04:33.076328 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls May 14 00:04:33.076343 kernel: RETBleed: Mitigation: untrained return thunk May 14 00:04:33.076355 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier May 14 00:04:33.076368 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl May 14 00:04:33.076381 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' May 14 00:04:33.076394 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' May 14 00:04:33.076409 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' May 14 00:04:33.076421 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 May 14 00:04:33.076434 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. May 14 00:04:33.076455 kernel: Freeing SMP alternatives memory: 32K May 14 00:04:33.076468 kernel: pid_max: default: 32768 minimum: 301 May 14 00:04:33.076480 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity May 14 00:04:33.076493 kernel: landlock: Up and running. May 14 00:04:33.076506 kernel: SELinux: Initializing. May 14 00:04:33.076518 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) May 14 00:04:33.076533 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) May 14 00:04:33.076546 kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0) May 14 00:04:33.076559 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. May 14 00:04:33.076572 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. May 14 00:04:33.076585 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. May 14 00:04:33.076598 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. May 14 00:04:33.076610 kernel: ... version: 0 May 14 00:04:33.076623 kernel: ... bit width: 48 May 14 00:04:33.076637 kernel: ... generic registers: 6 May 14 00:04:33.076650 kernel: ... value mask: 0000ffffffffffff May 14 00:04:33.076662 kernel: ... max period: 00007fffffffffff May 14 00:04:33.076675 kernel: ... fixed-purpose events: 0 May 14 00:04:33.076687 kernel: ... event mask: 000000000000003f May 14 00:04:33.076700 kernel: signal: max sigframe size: 1776 May 14 00:04:33.076712 kernel: rcu: Hierarchical SRCU implementation. May 14 00:04:33.076725 kernel: rcu: Max phase no-delay instances is 400. May 14 00:04:33.076737 kernel: smp: Bringing up secondary CPUs ... May 14 00:04:33.076750 kernel: smpboot: x86: Booting SMP configuration: May 14 00:04:33.076764 kernel: .... 
node #0, CPUs: #1 May 14 00:04:33.076777 kernel: smp: Brought up 1 node, 2 CPUs May 14 00:04:33.076789 kernel: smpboot: Max logical packages: 1 May 14 00:04:33.076802 kernel: smpboot: Total of 2 processors activated (9981.24 BogoMIPS) May 14 00:04:33.076814 kernel: devtmpfs: initialized May 14 00:04:33.076827 kernel: x86/mm: Memory block size: 128MB May 14 00:04:33.076840 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns May 14 00:04:33.076853 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) May 14 00:04:33.076865 kernel: pinctrl core: initialized pinctrl subsystem May 14 00:04:33.076880 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family May 14 00:04:33.076893 kernel: audit: initializing netlink subsys (disabled) May 14 00:04:33.076906 kernel: audit: type=2000 audit(1747181071.443:1): state=initialized audit_enabled=0 res=1 May 14 00:04:33.076918 kernel: thermal_sys: Registered thermal governor 'step_wise' May 14 00:04:33.076936 kernel: thermal_sys: Registered thermal governor 'user_space' May 14 00:04:33.076953 kernel: cpuidle: using governor menu May 14 00:04:33.076971 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 May 14 00:04:33.076984 kernel: dca service started, version 1.12.1 May 14 00:04:33.076997 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) May 14 00:04:33.077013 kernel: PCI: Using configuration type 1 for base access May 14 00:04:33.077074 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. May 14 00:04:33.077103 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages May 14 00:04:33.077116 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page May 14 00:04:33.077129 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages May 14 00:04:33.077142 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page May 14 00:04:33.077154 kernel: ACPI: Added _OSI(Module Device) May 14 00:04:33.077167 kernel: ACPI: Added _OSI(Processor Device) May 14 00:04:33.077179 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) May 14 00:04:33.077196 kernel: ACPI: Added _OSI(Processor Aggregator Device) May 14 00:04:33.077208 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded May 14 00:04:33.077221 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC May 14 00:04:33.077234 kernel: ACPI: Interpreter enabled May 14 00:04:33.077246 kernel: ACPI: PM: (supports S0 S5) May 14 00:04:33.077259 kernel: ACPI: Using IOAPIC for interrupt routing May 14 00:04:33.077271 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug May 14 00:04:33.077284 kernel: PCI: Using E820 reservations for host bridge windows May 14 00:04:33.077297 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F May 14 00:04:33.077312 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) May 14 00:04:33.077548 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 14 00:04:33.077687 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] May 14 00:04:33.077815 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] May 14 00:04:33.077833 kernel: PCI host bridge to bus 0000:00 May 14 00:04:33.077964 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] May 14 00:04:33.078126 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] May 14 
00:04:33.078279 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] May 14 00:04:33.078395 kernel: pci_bus 0000:00: root bus resource [mem 0x7d000000-0xafffffff window] May 14 00:04:33.078568 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] May 14 00:04:33.078685 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window] May 14 00:04:33.078796 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] May 14 00:04:33.078941 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 May 14 00:04:33.079129 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x030000 May 14 00:04:33.079262 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfb800000-0xfbffffff pref] May 14 00:04:33.079391 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfd200000-0xfd203fff 64bit pref] May 14 00:04:33.079532 kernel: pci 0000:00:01.0: reg 0x20: [mem 0xfea10000-0xfea10fff] May 14 00:04:33.079661 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea00000-0xfea0ffff pref] May 14 00:04:33.079787 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] May 14 00:04:33.079924 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 May 14 00:04:33.080093 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea11000-0xfea11fff] May 14 00:04:33.080232 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 May 14 00:04:33.080361 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea12000-0xfea12fff] May 14 00:04:33.080511 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 May 14 00:04:33.080642 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea13000-0xfea13fff] May 14 00:04:33.080781 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 May 14 00:04:33.080916 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea14000-0xfea14fff] May 14 00:04:33.081087 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 May 14 00:04:33.081219 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea15000-0xfea15fff] May 14 00:04:33.081352 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 May 14 00:04:33.081492 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea16000-0xfea16fff] May 14 00:04:33.081626 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 May 14 00:04:33.081759 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea17000-0xfea17fff] May 14 00:04:33.081892 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 May 14 00:04:33.082089 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea18000-0xfea18fff] May 14 00:04:33.082235 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400 May 14 00:04:33.082364 kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfea19000-0xfea19fff] May 14 00:04:33.082513 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 May 14 00:04:33.082648 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO May 14 00:04:33.082781 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 May 14 00:04:33.082909 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc040-0xc05f] May 14 00:04:33.083144 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea1a000-0xfea1afff] May 14 00:04:33.083281 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 May 14 00:04:33.083407 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] May 14 00:04:33.083558 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000 May 14 00:04:33.083698 kernel: pci 0000:01:00.0: reg 0x14: [mem 0xfe880000-0xfe880fff] May 14 00:04:33.083828 kernel: pci 0000:01:00.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref] May 14 
00:04:33.083957 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfe800000-0xfe87ffff pref] May 14 00:04:33.084105 kernel: pci 0000:00:02.0: PCI bridge to [bus 01] May 14 00:04:33.084233 kernel: pci 0000:00:02.0: bridge window [mem 0xfe800000-0xfe9fffff] May 14 00:04:33.084358 kernel: pci 0000:00:02.0: bridge window [mem 0xfd000000-0xfd1fffff 64bit pref] May 14 00:04:33.084510 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330 May 14 00:04:33.084648 kernel: pci 0000:02:00.0: reg 0x10: [mem 0xfe600000-0xfe603fff 64bit] May 14 00:04:33.084776 kernel: pci 0000:00:02.1: PCI bridge to [bus 02] May 14 00:04:33.084901 kernel: pci 0000:00:02.1: bridge window [mem 0xfe600000-0xfe7fffff] May 14 00:04:33.085076 kernel: pci 0000:00:02.1: bridge window [mem 0xfce00000-0xfcffffff 64bit pref] May 14 00:04:33.085224 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000 May 14 00:04:33.085355 kernel: pci 0000:03:00.0: reg 0x14: [mem 0xfe400000-0xfe400fff] May 14 00:04:33.085506 kernel: pci 0000:03:00.0: reg 0x20: [mem 0xfcc00000-0xfcc03fff 64bit pref] May 14 00:04:33.085634 kernel: pci 0000:00:02.2: PCI bridge to [bus 03] May 14 00:04:33.085759 kernel: pci 0000:00:02.2: bridge window [mem 0xfe400000-0xfe5fffff] May 14 00:04:33.085888 kernel: pci 0000:00:02.2: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref] May 14 00:04:33.086126 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00 May 14 00:04:33.086272 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref] May 14 00:04:33.086432 kernel: pci 0000:00:02.3: PCI bridge to [bus 04] May 14 00:04:33.086647 kernel: pci 0000:00:02.3: bridge window [mem 0xfe200000-0xfe3fffff] May 14 00:04:33.086801 kernel: pci 0000:00:02.3: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref] May 14 00:04:33.086953 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00 May 14 00:04:33.087150 kernel: pci 0000:05:00.0: reg 0x14: [mem 0xfe000000-0xfe000fff] May 14 00:04:33.087288 kernel: pci 0000:05:00.0: reg 0x20: [mem 0xfc800000-0xfc803fff 64bit pref] May 14 00:04:33.087419 kernel: pci 0000:00:02.4: PCI bridge to [bus 05] May 14 00:04:33.087568 kernel: pci 0000:00:02.4: bridge window [mem 0xfe000000-0xfe1fffff] May 14 00:04:33.087697 kernel: pci 0000:00:02.4: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref] May 14 00:04:33.087847 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000 May 14 00:04:33.087982 kernel: pci 0000:06:00.0: reg 0x14: [mem 0xfde00000-0xfde00fff] May 14 00:04:33.088174 kernel: pci 0000:06:00.0: reg 0x20: [mem 0xfc600000-0xfc603fff 64bit pref] May 14 00:04:33.088359 kernel: pci 0000:00:02.5: PCI bridge to [bus 06] May 14 00:04:33.088534 kernel: pci 0000:00:02.5: bridge window [mem 0xfde00000-0xfdffffff] May 14 00:04:33.088661 kernel: pci 0000:00:02.5: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref] May 14 00:04:33.088677 kernel: acpiphp: Slot [0] registered May 14 00:04:33.088820 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000 May 14 00:04:33.088955 kernel: pci 0000:07:00.0: reg 0x14: [mem 0xfdc80000-0xfdc80fff] May 14 00:04:33.089144 kernel: pci 0000:07:00.0: reg 0x20: [mem 0xfc400000-0xfc403fff 64bit pref] May 14 00:04:33.089274 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfdc00000-0xfdc7ffff pref] May 14 00:04:33.089400 kernel: pci 0000:00:02.6: PCI bridge to [bus 07] May 14 00:04:33.089541 kernel: pci 0000:00:02.6: bridge window [mem 0xfdc00000-0xfddfffff] May 14 00:04:33.089665 kernel: pci 0000:00:02.6: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref] May 14 00:04:33.089682 
kernel: acpiphp: Slot [0-2] registered May 14 00:04:33.089810 kernel: pci 0000:00:02.7: PCI bridge to [bus 08] May 14 00:04:33.089933 kernel: pci 0000:00:02.7: bridge window [mem 0xfda00000-0xfdbfffff] May 14 00:04:33.090080 kernel: pci 0000:00:02.7: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref] May 14 00:04:33.090098 kernel: acpiphp: Slot [0-3] registered May 14 00:04:33.090223 kernel: pci 0000:00:03.0: PCI bridge to [bus 09] May 14 00:04:33.090348 kernel: pci 0000:00:03.0: bridge window [mem 0xfd800000-0xfd9fffff] May 14 00:04:33.090487 kernel: pci 0000:00:03.0: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref] May 14 00:04:33.090504 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 May 14 00:04:33.090522 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 May 14 00:04:33.090535 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 May 14 00:04:33.090548 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 May 14 00:04:33.090561 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 May 14 00:04:33.090574 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 May 14 00:04:33.090587 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 May 14 00:04:33.090600 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 May 14 00:04:33.090612 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 May 14 00:04:33.090625 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 May 14 00:04:33.090640 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 May 14 00:04:33.090653 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 May 14 00:04:33.090666 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 May 14 00:04:33.090679 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 May 14 00:04:33.090692 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 May 14 00:04:33.090705 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 May 14 00:04:33.090718 kernel: iommu: Default domain type: Translated May 14 00:04:33.090731 kernel: iommu: DMA domain TLB invalidation policy: lazy mode May 14 00:04:33.090743 kernel: PCI: Using ACPI for IRQ routing May 14 00:04:33.090758 kernel: PCI: pci_cache_line_size set to 64 bytes May 14 00:04:33.090771 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] May 14 00:04:33.090784 kernel: e820: reserve RAM buffer [mem 0x7cfdc000-0x7fffffff] May 14 00:04:33.090913 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device May 14 00:04:33.091110 kernel: pci 0000:00:01.0: vgaarb: bridge control possible May 14 00:04:33.091273 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none May 14 00:04:33.091291 kernel: vgaarb: loaded May 14 00:04:33.091305 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 May 14 00:04:33.091318 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter May 14 00:04:33.091337 kernel: clocksource: Switched to clocksource kvm-clock May 14 00:04:33.091349 kernel: VFS: Disk quotas dquot_6.6.0 May 14 00:04:33.091363 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) May 14 00:04:33.091375 kernel: pnp: PnP ACPI init May 14 00:04:33.091531 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved May 14 00:04:33.091550 kernel: pnp: PnP ACPI: found 5 devices May 14 00:04:33.091564 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns May 14 00:04:33.091577 kernel: NET: Registered PF_INET protocol family May 14 
00:04:33.091593 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) May 14 00:04:33.091606 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) May 14 00:04:33.091619 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) May 14 00:04:33.091632 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) May 14 00:04:33.091645 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) May 14 00:04:33.091658 kernel: TCP: Hash tables configured (established 16384 bind 16384) May 14 00:04:33.091671 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) May 14 00:04:33.091684 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) May 14 00:04:33.091696 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family May 14 00:04:33.091711 kernel: NET: Registered PF_XDP protocol family May 14 00:04:33.091840 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 May 14 00:04:33.091967 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 May 14 00:04:33.092165 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 May 14 00:04:33.092292 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x1000-0x1fff] May 14 00:04:33.092416 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x2000-0x2fff] May 14 00:04:33.092558 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x3000-0x3fff] May 14 00:04:33.092689 kernel: pci 0000:00:02.0: PCI bridge to [bus 01] May 14 00:04:33.092815 kernel: pci 0000:00:02.0: bridge window [mem 0xfe800000-0xfe9fffff] May 14 00:04:33.092901 kernel: pci 0000:00:02.0: bridge window [mem 0xfd000000-0xfd1fffff 64bit pref] May 14 00:04:33.092976 kernel: pci 0000:00:02.1: PCI bridge to [bus 02] May 14 00:04:33.093068 kernel: pci 0000:00:02.1: bridge window [mem 0xfe600000-0xfe7fffff] May 14 00:04:33.093140 kernel: pci 0000:00:02.1: bridge window [mem 0xfce00000-0xfcffffff 64bit pref] May 14 00:04:33.093213 kernel: pci 0000:00:02.2: PCI bridge to [bus 03] May 14 00:04:33.093286 kernel: pci 0000:00:02.2: bridge window [mem 0xfe400000-0xfe5fffff] May 14 00:04:33.093361 kernel: pci 0000:00:02.2: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref] May 14 00:04:33.093436 kernel: pci 0000:00:02.3: PCI bridge to [bus 04] May 14 00:04:33.093517 kernel: pci 0000:00:02.3: bridge window [mem 0xfe200000-0xfe3fffff] May 14 00:04:33.093589 kernel: pci 0000:00:02.3: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref] May 14 00:04:33.093662 kernel: pci 0000:00:02.4: PCI bridge to [bus 05] May 14 00:04:33.093736 kernel: pci 0000:00:02.4: bridge window [mem 0xfe000000-0xfe1fffff] May 14 00:04:33.093813 kernel: pci 0000:00:02.4: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref] May 14 00:04:33.093898 kernel: pci 0000:00:02.5: PCI bridge to [bus 06] May 14 00:04:33.093974 kernel: pci 0000:00:02.5: bridge window [mem 0xfde00000-0xfdffffff] May 14 00:04:33.094077 kernel: pci 0000:00:02.5: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref] May 14 00:04:33.094150 kernel: pci 0000:00:02.6: PCI bridge to [bus 07] May 14 00:04:33.094221 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x1fff] May 14 00:04:33.094293 kernel: pci 0000:00:02.6: bridge window [mem 0xfdc00000-0xfddfffff] May 14 00:04:33.094364 kernel: pci 0000:00:02.6: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref] May 14 00:04:33.094438 kernel: pci 0000:00:02.7: PCI bridge to [bus 08] May 14 00:04:33.094521 
kernel: pci 0000:00:02.7: bridge window [io 0x2000-0x2fff] May 14 00:04:33.094595 kernel: pci 0000:00:02.7: bridge window [mem 0xfda00000-0xfdbfffff] May 14 00:04:33.094672 kernel: pci 0000:00:02.7: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref] May 14 00:04:33.094747 kernel: pci 0000:00:03.0: PCI bridge to [bus 09] May 14 00:04:33.094820 kernel: pci 0000:00:03.0: bridge window [io 0x3000-0x3fff] May 14 00:04:33.094893 kernel: pci 0000:00:03.0: bridge window [mem 0xfd800000-0xfd9fffff] May 14 00:04:33.094971 kernel: pci 0000:00:03.0: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref] May 14 00:04:33.095057 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] May 14 00:04:33.095124 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] May 14 00:04:33.095194 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] May 14 00:04:33.095259 kernel: pci_bus 0000:00: resource 7 [mem 0x7d000000-0xafffffff window] May 14 00:04:33.095325 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] May 14 00:04:33.095391 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window] May 14 00:04:33.095476 kernel: pci_bus 0000:01: resource 1 [mem 0xfe800000-0xfe9fffff] May 14 00:04:33.095546 kernel: pci_bus 0000:01: resource 2 [mem 0xfd000000-0xfd1fffff 64bit pref] May 14 00:04:33.095620 kernel: pci_bus 0000:02: resource 1 [mem 0xfe600000-0xfe7fffff] May 14 00:04:33.095689 kernel: pci_bus 0000:02: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref] May 14 00:04:33.095762 kernel: pci_bus 0000:03: resource 1 [mem 0xfe400000-0xfe5fffff] May 14 00:04:33.095833 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref] May 14 00:04:33.095913 kernel: pci_bus 0000:04: resource 1 [mem 0xfe200000-0xfe3fffff] May 14 00:04:33.096014 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref] May 14 00:04:33.096186 kernel: pci_bus 0000:05: resource 1 [mem 0xfe000000-0xfe1fffff] May 14 00:04:33.096256 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref] May 14 00:04:33.096330 kernel: pci_bus 0000:06: resource 1 [mem 0xfde00000-0xfdffffff] May 14 00:04:33.096404 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref] May 14 00:04:33.096490 kernel: pci_bus 0000:07: resource 0 [io 0x1000-0x1fff] May 14 00:04:33.096560 kernel: pci_bus 0000:07: resource 1 [mem 0xfdc00000-0xfddfffff] May 14 00:04:33.096629 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref] May 14 00:04:33.096702 kernel: pci_bus 0000:08: resource 0 [io 0x2000-0x2fff] May 14 00:04:33.096771 kernel: pci_bus 0000:08: resource 1 [mem 0xfda00000-0xfdbfffff] May 14 00:04:33.096837 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref] May 14 00:04:33.096914 kernel: pci_bus 0000:09: resource 0 [io 0x3000-0x3fff] May 14 00:04:33.096983 kernel: pci_bus 0000:09: resource 1 [mem 0xfd800000-0xfd9fffff] May 14 00:04:33.097066 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref] May 14 00:04:33.097078 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 May 14 00:04:33.097086 kernel: PCI: CLS 0 bytes, default 64 May 14 00:04:33.097094 kernel: Initialise system trusted keyrings May 14 00:04:33.097102 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 May 14 00:04:33.097111 kernel: Key type asymmetric registered May 14 00:04:33.097121 kernel: Asymmetric key parser 'x509' registered May 14 00:04:33.097129 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded 
(major 251) May 14 00:04:33.097137 kernel: io scheduler mq-deadline registered May 14 00:04:33.097148 kernel: io scheduler kyber registered May 14 00:04:33.097158 kernel: io scheduler bfq registered May 14 00:04:33.097254 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 May 14 00:04:33.097332 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 May 14 00:04:33.097410 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 May 14 00:04:33.097497 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 May 14 00:04:33.097579 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 May 14 00:04:33.097655 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 May 14 00:04:33.097730 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 May 14 00:04:33.097803 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 May 14 00:04:33.097882 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 May 14 00:04:33.097992 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 May 14 00:04:33.098133 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 May 14 00:04:33.098210 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 May 14 00:04:33.098290 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 May 14 00:04:33.098363 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 May 14 00:04:33.098439 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 May 14 00:04:33.098526 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 May 14 00:04:33.098537 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 May 14 00:04:33.098614 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 32 May 14 00:04:33.098690 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 32 May 14 00:04:33.098700 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 May 14 00:04:33.098709 kernel: ACPI: \_SB_.GSIF: Enabled at IRQ 21 May 14 00:04:33.098721 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled May 14 00:04:33.098729 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A May 14 00:04:33.098737 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 May 14 00:04:33.098744 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 May 14 00:04:33.098752 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 May 14 00:04:33.098760 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 May 14 00:04:33.098839 kernel: rtc_cmos 00:03: RTC can wake from S4 May 14 00:04:33.098910 kernel: rtc_cmos 00:03: registered as rtc0 May 14 00:04:33.098983 kernel: rtc_cmos 00:03: setting system clock to 2025-05-14T00:04:32 UTC (1747181072) May 14 00:04:33.099068 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs May 14 00:04:33.099079 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled May 14 00:04:33.099088 kernel: NET: Registered PF_INET6 protocol family May 14 00:04:33.099096 kernel: Segment Routing with IPv6 May 14 00:04:33.099103 kernel: In-situ OAM (IOAM) with IPv6 May 14 00:04:33.099111 kernel: NET: Registered PF_PACKET protocol family May 14 00:04:33.099119 kernel: Key type dns_resolver registered May 14 00:04:33.099130 kernel: IPI shorthand broadcast: enabled May 14 00:04:33.099137 kernel: sched_clock: Marking stable (1421011371, 143844683)->(1576709286, -11853232) May 14 00:04:33.099145 kernel: registered taskstats version 1 May 14 00:04:33.099153 kernel: Loading compiled-in X.509 certificates May 14 00:04:33.099161 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module 
signing key for 6.6.89-flatcar: 166efda032ca4d6e9037c569aca9b53585ee6f94' May 14 00:04:33.099169 kernel: Key type .fscrypt registered May 14 00:04:33.099176 kernel: Key type fscrypt-provisioning registered May 14 00:04:33.099184 kernel: ima: No TPM chip found, activating TPM-bypass! May 14 00:04:33.099192 kernel: ima: Allocated hash algorithm: sha1 May 14 00:04:33.099202 kernel: ima: No architecture policies found May 14 00:04:33.099210 kernel: clk: Disabling unused clocks May 14 00:04:33.099217 kernel: Freeing unused kernel image (initmem) memory: 43604K May 14 00:04:33.099225 kernel: Write protecting the kernel read-only data: 40960k May 14 00:04:33.099233 kernel: Freeing unused kernel image (rodata/data gap) memory: 1556K May 14 00:04:33.099242 kernel: Run /init as init process May 14 00:04:33.099249 kernel: with arguments: May 14 00:04:33.099257 kernel: /init May 14 00:04:33.099265 kernel: with environment: May 14 00:04:33.099274 kernel: HOME=/ May 14 00:04:33.099282 kernel: TERM=linux May 14 00:04:33.099290 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 14 00:04:33.099298 systemd[1]: Successfully made /usr/ read-only. May 14 00:04:33.099310 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 14 00:04:33.099319 systemd[1]: Detected virtualization kvm. May 14 00:04:33.099328 systemd[1]: Detected architecture x86-64. May 14 00:04:33.099336 systemd[1]: Running in initrd. May 14 00:04:33.099346 systemd[1]: No hostname configured, using default hostname. May 14 00:04:33.099354 systemd[1]: Hostname set to . May 14 00:04:33.099363 systemd[1]: Initializing machine ID from VM UUID. May 14 00:04:33.099371 systemd[1]: Queued start job for default target initrd.target. May 14 00:04:33.099382 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 14 00:04:33.099390 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 14 00:04:33.099399 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... May 14 00:04:33.099407 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 14 00:04:33.099417 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... May 14 00:04:33.099426 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... May 14 00:04:33.099435 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... May 14 00:04:33.099453 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... May 14 00:04:33.099464 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 14 00:04:33.099472 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 14 00:04:33.099482 systemd[1]: Reached target paths.target - Path Units. May 14 00:04:33.099490 systemd[1]: Reached target slices.target - Slice Units. May 14 00:04:33.099499 systemd[1]: Reached target swap.target - Swaps. May 14 00:04:33.099507 systemd[1]: Reached target timers.target - Timer Units. 
May 14 00:04:33.099516 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. May 14 00:04:33.099524 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 14 00:04:33.099533 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). May 14 00:04:33.099541 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. May 14 00:04:33.099550 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 14 00:04:33.099560 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 14 00:04:33.099568 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 14 00:04:33.099576 systemd[1]: Reached target sockets.target - Socket Units. May 14 00:04:33.099585 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... May 14 00:04:33.099593 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 14 00:04:33.099602 systemd[1]: Finished network-cleanup.service - Network Cleanup. May 14 00:04:33.099610 systemd[1]: Starting systemd-fsck-usr.service... May 14 00:04:33.099618 systemd[1]: Starting systemd-journald.service - Journal Service... May 14 00:04:33.099626 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 14 00:04:33.099636 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 14 00:04:33.099644 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. May 14 00:04:33.099653 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 14 00:04:33.099662 systemd[1]: Finished systemd-fsck-usr.service. May 14 00:04:33.099671 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 14 00:04:33.099680 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 14 00:04:33.099688 kernel: Bridge firewalling registered May 14 00:04:33.099719 systemd-journald[187]: Collecting audit messages is disabled. May 14 00:04:33.099743 systemd-journald[187]: Journal started May 14 00:04:33.099763 systemd-journald[187]: Runtime Journal (/run/log/journal/479fabf3dea0400b8a01cccaba15f131) is 4.7M, max 38.3M, 33.5M free. May 14 00:04:33.033841 systemd-modules-load[189]: Inserted module 'overlay' May 14 00:04:33.113242 systemd[1]: Started systemd-journald.service - Journal Service. May 14 00:04:33.072066 systemd-modules-load[189]: Inserted module 'br_netfilter' May 14 00:04:33.116442 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 14 00:04:33.117587 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 14 00:04:33.122124 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 14 00:04:33.125117 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 14 00:04:33.128298 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 14 00:04:33.129003 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 14 00:04:33.141370 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 14 00:04:33.142928 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
May 14 00:04:33.151356 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 14 00:04:33.155097 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 14 00:04:33.158520 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 14 00:04:33.167197 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 14 00:04:33.168943 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 14 00:04:33.173571 dracut-cmdline[223]: dracut-dracut-053
May 14 00:04:33.175579 dracut-cmdline[223]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=8b3c5774a4242053287d41edc0d029958b7c22c131f7dd36b16a68182354e130
May 14 00:04:33.210363 systemd-resolved[224]: Positive Trust Anchors:
May 14 00:04:33.210966 systemd-resolved[224]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 14 00:04:33.210997 systemd-resolved[224]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 14 00:04:33.219539 systemd-resolved[224]: Defaulting to hostname 'linux'.
May 14 00:04:33.220431 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 14 00:04:33.221157 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 14 00:04:33.230060 kernel: SCSI subsystem initialized
May 14 00:04:33.238063 kernel: Loading iSCSI transport class v2.0-870.
May 14 00:04:33.248669 kernel: iscsi: registered transport (tcp)
May 14 00:04:33.282728 kernel: iscsi: registered transport (qla4xxx)
May 14 00:04:33.282804 kernel: QLogic iSCSI HBA Driver
May 14 00:04:33.326605 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 14 00:04:33.330238 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 14 00:04:33.384290 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 14 00:04:33.384377 kernel: device-mapper: uevent: version 1.0.3
May 14 00:04:33.386381 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
May 14 00:04:33.445120 kernel: raid6: avx2x4 gen() 13828 MB/s
May 14 00:04:33.463098 kernel: raid6: avx2x2 gen() 17250 MB/s
May 14 00:04:33.482381 kernel: raid6: avx2x1 gen() 15070 MB/s
May 14 00:04:33.482485 kernel: raid6: using algorithm avx2x2 gen() 17250 MB/s
May 14 00:04:33.500322 kernel: raid6: .... xor() 20248 MB/s, rmw enabled
May 14 00:04:33.500417 kernel: raid6: using avx2x2 recovery algorithm
May 14 00:04:33.520852 kernel: xor: automatically using best checksumming function avx
May 14 00:04:33.661091 kernel: Btrfs loaded, zoned=no, fsverity=no
May 14 00:04:33.676090 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
May 14 00:04:33.680481 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 14 00:04:33.703622 systemd-udevd[408]: Using default interface naming scheme 'v255'.
May 14 00:04:33.707702 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 14 00:04:33.713782 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 14 00:04:33.732367 dracut-pre-trigger[419]: rd.md=0: removing MD RAID activation
May 14 00:04:33.762445 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 14 00:04:33.765380 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 14 00:04:33.810337 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 14 00:04:33.817213 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
May 14 00:04:33.854392 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
May 14 00:04:33.857853 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
May 14 00:04:33.859396 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 14 00:04:33.861197 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 14 00:04:33.867767 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
May 14 00:04:33.889101 kernel: scsi host0: Virtio SCSI HBA
May 14 00:04:33.889273 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
May 14 00:04:33.892619 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
May 14 00:04:33.913041 kernel: cryptd: max_cpu_qlen set to 1000
May 14 00:04:33.939038 kernel: libata version 3.00 loaded.
May 14 00:04:33.945038 kernel: ahci 0000:00:1f.2: version 3.0
May 14 00:04:33.963085 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
May 14 00:04:33.963147 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
May 14 00:04:33.963340 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
May 14 00:04:33.970091 kernel: scsi host1: ahci
May 14 00:04:33.970280 kernel: scsi host2: ahci
May 14 00:04:33.970376 kernel: scsi host3: ahci
May 14 00:04:33.973294 kernel: scsi host4: ahci
May 14 00:04:33.978347 kernel: AVX2 version of gcm_enc/dec engaged.
May 14 00:04:33.978398 kernel: AES CTR mode by8 optimization enabled
May 14 00:04:33.980063 kernel: scsi host5: ahci
May 14 00:04:33.978859 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 14 00:04:33.978975 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 14 00:04:33.980591 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 14 00:04:33.981051 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 14 00:04:33.981161 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 14 00:04:33.985269 kernel: scsi host6: ahci May 14 00:04:33.985438 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a100 irq 42 May 14 00:04:33.985461 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a180 irq 42 May 14 00:04:33.984763 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... May 14 00:04:34.006542 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a200 irq 42 May 14 00:04:34.006567 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a280 irq 42 May 14 00:04:34.006576 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a300 irq 42 May 14 00:04:34.006585 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a380 irq 42 May 14 00:04:34.006594 kernel: ACPI: bus type USB registered May 14 00:04:34.006603 kernel: usbcore: registered new interface driver usbfs May 14 00:04:34.006612 kernel: usbcore: registered new interface driver hub May 14 00:04:34.006621 kernel: usbcore: registered new device driver usb May 14 00:04:34.008304 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 14 00:04:34.058233 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 14 00:04:34.060559 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 14 00:04:34.084537 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 14 00:04:34.309057 kernel: ata2: SATA link down (SStatus 0 SControl 300) May 14 00:04:34.309159 kernel: ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 300) May 14 00:04:34.309182 kernel: ata6: SATA link down (SStatus 0 SControl 300) May 14 00:04:34.314064 kernel: ata4: SATA link down (SStatus 0 SControl 300) May 14 00:04:34.314123 kernel: ata3: SATA link down (SStatus 0 SControl 300) May 14 00:04:34.317088 kernel: ata5: SATA link down (SStatus 0 SControl 300) May 14 00:04:34.320067 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 May 14 00:04:34.322371 kernel: ata1.00: applying bridge limits May 14 00:04:34.324553 kernel: ata1.00: configured for UDMA/100 May 14 00:04:34.331071 kernel: scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 May 14 00:04:34.383640 kernel: sd 0:0:0:0: Power-on or device reset occurred May 14 00:04:34.389222 kernel: sd 0:0:0:0: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB) May 14 00:04:34.389490 kernel: sd 0:0:0:0: [sda] Write Protect is off May 14 00:04:34.389652 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08 May 14 00:04:34.395190 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA May 14 00:04:34.409638 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. May 14 00:04:34.409848 kernel: GPT:17805311 != 80003071 May 14 00:04:34.409901 kernel: GPT:Alternate GPT header not at the end of the disk. May 14 00:04:34.409947 kernel: GPT:17805311 != 80003071 May 14 00:04:34.409987 kernel: GPT: Use GNU Parted to correct GPT errors. 
May 14 00:04:34.410091 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 14 00:04:34.413333 kernel: sd 0:0:0:0: [sda] Attached SCSI disk May 14 00:04:34.426105 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller May 14 00:04:34.426297 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1 May 14 00:04:34.431053 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 May 14 00:04:34.435287 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller May 14 00:04:34.435477 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2 May 14 00:04:34.435573 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed May 14 00:04:34.439121 kernel: hub 1-0:1.0: USB hub found May 14 00:04:34.439313 kernel: hub 1-0:1.0: 4 ports detected May 14 00:04:34.444496 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. May 14 00:04:34.444696 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray May 14 00:04:34.444811 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 May 14 00:04:34.445038 kernel: hub 2-0:1.0: USB hub found May 14 00:04:34.462092 kernel: hub 2-0:1.0: 4 ports detected May 14 00:04:34.466043 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0 May 14 00:04:34.478067 kernel: BTRFS: device fsid d2fbd39e-42cb-4ccb-87ec-99f56cfe77f8 devid 1 transid 39 /dev/sda3 scanned by (udev-worker) (457) May 14 00:04:34.488094 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. May 14 00:04:34.500489 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. May 14 00:04:34.502416 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. May 14 00:04:34.506057 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by (udev-worker) (462) May 14 00:04:34.514531 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. May 14 00:04:34.532214 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. May 14 00:04:34.534770 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... May 14 00:04:34.555055 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 14 00:04:34.555419 disk-uuid[575]: Primary Header is updated. May 14 00:04:34.555419 disk-uuid[575]: Secondary Entries is updated. May 14 00:04:34.555419 disk-uuid[575]: Secondary Header is updated. May 14 00:04:34.689068 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd May 14 00:04:34.834092 kernel: hid: raw HID events driver (C) Jiri Kosina May 14 00:04:34.843167 kernel: usbcore: registered new interface driver usbhid May 14 00:04:34.843237 kernel: usbhid: USB HID core driver May 14 00:04:34.855992 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2 May 14 00:04:34.856084 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 May 14 00:04:35.575325 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 14 00:04:35.575398 disk-uuid[576]: The operation has completed successfully. May 14 00:04:35.655911 systemd[1]: disk-uuid.service: Deactivated successfully. May 14 00:04:35.656058 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. 
May 14 00:04:35.712341 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... May 14 00:04:35.725622 sh[593]: Success May 14 00:04:35.740060 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" May 14 00:04:35.809671 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. May 14 00:04:35.816156 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... May 14 00:04:35.828769 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. May 14 00:04:35.846251 kernel: BTRFS info (device dm-0): first mount of filesystem d2fbd39e-42cb-4ccb-87ec-99f56cfe77f8 May 14 00:04:35.846325 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm May 14 00:04:35.851303 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead May 14 00:04:35.851353 kernel: BTRFS info (device dm-0): disabling log replay at mount time May 14 00:04:35.853597 kernel: BTRFS info (device dm-0): using free space tree May 14 00:04:35.866060 kernel: BTRFS info (device dm-0): enabling ssd optimizations May 14 00:04:35.869159 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. May 14 00:04:35.871195 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. May 14 00:04:35.874392 systemd[1]: Starting ignition-setup.service - Ignition (setup)... May 14 00:04:35.880233 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... May 14 00:04:35.922224 kernel: BTRFS info (device sda6): first mount of filesystem c0e200fb-7321-4d2d-86ff-b28bdae5fafc May 14 00:04:35.922294 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm May 14 00:04:35.922312 kernel: BTRFS info (device sda6): using free space tree May 14 00:04:35.930525 kernel: BTRFS info (device sda6): enabling ssd optimizations May 14 00:04:35.930597 kernel: BTRFS info (device sda6): auto enabling async discard May 14 00:04:35.938068 kernel: BTRFS info (device sda6): last unmount of filesystem c0e200fb-7321-4d2d-86ff-b28bdae5fafc May 14 00:04:35.942320 systemd[1]: Finished ignition-setup.service - Ignition (setup). May 14 00:04:35.946314 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... May 14 00:04:36.013079 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 14 00:04:36.020849 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 14 00:04:36.053610 ignition[719]: Ignition 2.20.0 May 14 00:04:36.054328 ignition[719]: Stage: fetch-offline May 14 00:04:36.054802 ignition[719]: no configs at "/usr/lib/ignition/base.d" May 14 00:04:36.055281 ignition[719]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" May 14 00:04:36.055382 ignition[719]: parsed url from cmdline: "" May 14 00:04:36.055385 ignition[719]: no config URL provided May 14 00:04:36.055390 ignition[719]: reading system config file "/usr/lib/ignition/user.ign" May 14 00:04:36.057531 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
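verity-setup.service assembles /dev/mapper/usr from the verity.usrhash= root hash passed on the kernel command line, and the surrounding dracut and Ignition units likewise take their settings from there. A minimal sketch of pulling those values out of /proc/cmdline, assuming the key=value layout shown in the boot command line; this illustrates where the values come from and is not the actual initrd parser:

    def parse_cmdline(path="/proc/cmdline"):
        """Split the kernel command line into a {key: value} dict (bare flags map to True)."""
        params = {}
        with open(path) as f:
            for token in f.read().split():
                key, sep, value = token.partition("=")
                params[key] = value if sep else True
        return params

    params = parse_cmdline()
    print("usr device: ", params.get("mount.usr"))      # e.g. /dev/mapper/usr
    print("verity hash:", params.get("verity.usrhash"))
    print("root device:", params.get("root"))           # e.g. LABEL=ROOT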
May 14 00:04:36.055395 ignition[719]: no config at "/usr/lib/ignition/user.ign" May 14 00:04:36.055400 ignition[719]: failed to fetch config: resource requires networking May 14 00:04:36.055625 ignition[719]: Ignition finished successfully May 14 00:04:36.073356 systemd-networkd[771]: lo: Link UP May 14 00:04:36.073365 systemd-networkd[771]: lo: Gained carrier May 14 00:04:36.075427 systemd-networkd[771]: Enumeration completed May 14 00:04:36.075608 systemd[1]: Started systemd-networkd.service - Network Configuration. May 14 00:04:36.076263 systemd-networkd[771]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 14 00:04:36.076266 systemd-networkd[771]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 14 00:04:36.076812 systemd-networkd[771]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 14 00:04:36.076815 systemd-networkd[771]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. May 14 00:04:36.077560 systemd-networkd[771]: eth0: Link UP May 14 00:04:36.077562 systemd-networkd[771]: eth0: Gained carrier May 14 00:04:36.077568 systemd-networkd[771]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 14 00:04:36.077828 systemd[1]: Reached target network.target - Network. May 14 00:04:36.079803 systemd-networkd[771]: eth1: Link UP May 14 00:04:36.079808 systemd-networkd[771]: eth1: Gained carrier May 14 00:04:36.079821 systemd-networkd[771]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 14 00:04:36.081987 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
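Ignition's offline fetch fails with "resource requires networking" and only succeeds once systemd-networkd reports "Gained carrier" for eth0/eth1. The same link readiness can be observed from sysfs; a small sketch, assuming the interface name eth0 from the log and the usual carrier attribute under /sys/class/net:

    import time

    def wait_for_carrier(iface="eth0", timeout=30.0):
        """Poll /sys/class/net/<iface>/carrier until the link is up or we time out."""
        deadline = time.monotonic() + timeout
        path = f"/sys/class/net/{iface}/carrier"
        while time.monotonic() < deadline:
            try:
                with open(path) as f:
                    if f.read().strip() == "1":
                        return True
            except OSError:
                pass  # reading carrier on a link that is still down raises EINVAL
            time.sleep(0.5)
        return False

    print("eth0 carrier:", wait_for_carrier("eth0"))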
May 14 00:04:36.100498 ignition[780]: Ignition 2.20.0 May 14 00:04:36.101491 ignition[780]: Stage: fetch May 14 00:04:36.101664 ignition[780]: no configs at "/usr/lib/ignition/base.d" May 14 00:04:36.101673 ignition[780]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" May 14 00:04:36.101745 ignition[780]: parsed url from cmdline: "" May 14 00:04:36.101748 ignition[780]: no config URL provided May 14 00:04:36.101752 ignition[780]: reading system config file "/usr/lib/ignition/user.ign" May 14 00:04:36.101757 ignition[780]: no config at "/usr/lib/ignition/user.ign" May 14 00:04:36.101779 ignition[780]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1 May 14 00:04:36.101906 ignition[780]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable May 14 00:04:36.130113 systemd-networkd[771]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1 May 14 00:04:36.146204 systemd-networkd[771]: eth0: DHCPv4 address 95.217.191.100/32, gateway 172.31.1.1 acquired from 172.31.1.1 May 14 00:04:36.302069 ignition[780]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2 May 14 00:04:36.306928 ignition[780]: GET result: OK May 14 00:04:36.307054 ignition[780]: parsing config with SHA512: 6d401e4e58d633abe9f75d7dcfeb0536e81ccbcdab7ca59b1fbc0711646c5d1e924887bc69aa6608dfe60975863074884e923d206ac255cf2780f2f8faeeea86 May 14 00:04:36.314672 unknown[780]: fetched base config from "system" May 14 00:04:36.315758 unknown[780]: fetched base config from "system" May 14 00:04:36.315771 unknown[780]: fetched user config from "hetzner" May 14 00:04:36.316496 ignition[780]: fetch: fetch complete May 14 00:04:36.316504 ignition[780]: fetch: fetch passed May 14 00:04:36.316576 ignition[780]: Ignition finished successfully May 14 00:04:36.319768 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). May 14 00:04:36.323195 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... May 14 00:04:36.356771 ignition[788]: Ignition 2.20.0 May 14 00:04:36.356789 ignition[788]: Stage: kargs May 14 00:04:36.357079 ignition[788]: no configs at "/usr/lib/ignition/base.d" May 14 00:04:36.357095 ignition[788]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" May 14 00:04:36.358763 ignition[788]: kargs: kargs passed May 14 00:04:36.362600 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). May 14 00:04:36.358831 ignition[788]: Ignition finished successfully May 14 00:04:36.367243 systemd[1]: Starting ignition-disks.service - Ignition (disks)... May 14 00:04:36.397649 ignition[795]: Ignition 2.20.0 May 14 00:04:36.397666 ignition[795]: Stage: disks May 14 00:04:36.397929 ignition[795]: no configs at "/usr/lib/ignition/base.d" May 14 00:04:36.401142 systemd[1]: Finished ignition-disks.service - Ignition (disks). May 14 00:04:36.397945 ignition[795]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" May 14 00:04:36.409327 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. May 14 00:04:36.399661 ignition[795]: disks: disks passed May 14 00:04:36.410712 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. May 14 00:04:36.399727 ignition[795]: Ignition finished successfully May 14 00:04:36.412721 systemd[1]: Reached target local-fs.target - Local File Systems. May 14 00:04:36.414983 systemd[1]: Reached target sysinit.target - System Initialization. 
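The fetch stage retries GET http://169.254.169.254/hetzner/v1/userdata until the DHCP lease is in place, then logs the SHA512 of the retrieved config. A sketch of the same request-and-verify loop using only the standard library; the endpoint URL is taken from the log, while the retry count and pacing are assumptions:

    import hashlib
    import time
    import urllib.error
    import urllib.request

    URL = "http://169.254.169.254/hetzner/v1/userdata"  # endpoint from the log

    def fetch_userdata(url=URL, attempts=5, delay=2.0):
        for attempt in range(1, attempts + 1):
            try:
                with urllib.request.urlopen(url, timeout=10) as resp:
                    return resp.read()
            except (urllib.error.URLError, OSError) as exc:
                print(f"attempt #{attempt} failed: {exc}")
                time.sleep(delay)
        raise RuntimeError("could not reach the metadata service")

    body = fetch_userdata()
    print("userdata bytes:", len(body))
    print("sha512:", hashlib.sha512(body).hexdigest())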
May 14 00:04:36.417263 systemd[1]: Reached target basic.target - Basic System. May 14 00:04:36.422208 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... May 14 00:04:36.455209 systemd-fsck[803]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks May 14 00:04:36.458445 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. May 14 00:04:36.462967 systemd[1]: Mounting sysroot.mount - /sysroot... May 14 00:04:36.585047 kernel: EXT4-fs (sda9): mounted filesystem c413e98b-da35-46b1-9852-45706e1b1f52 r/w with ordered data mode. Quota mode: none. May 14 00:04:36.585738 systemd[1]: Mounted sysroot.mount - /sysroot. May 14 00:04:36.586935 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. May 14 00:04:36.589216 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 14 00:04:36.591084 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... May 14 00:04:36.594111 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... May 14 00:04:36.595257 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 14 00:04:36.596164 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. May 14 00:04:36.604613 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. May 14 00:04:36.608330 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... May 14 00:04:36.619055 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (811) May 14 00:04:36.624156 kernel: BTRFS info (device sda6): first mount of filesystem c0e200fb-7321-4d2d-86ff-b28bdae5fafc May 14 00:04:36.624179 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm May 14 00:04:36.624189 kernel: BTRFS info (device sda6): using free space tree May 14 00:04:36.630746 kernel: BTRFS info (device sda6): enabling ssd optimizations May 14 00:04:36.630768 kernel: BTRFS info (device sda6): auto enabling async discard May 14 00:04:36.635638 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 14 00:04:36.668871 initrd-setup-root[838]: cut: /sysroot/etc/passwd: No such file or directory May 14 00:04:36.673386 initrd-setup-root[845]: cut: /sysroot/etc/group: No such file or directory May 14 00:04:36.678043 initrd-setup-root[852]: cut: /sysroot/etc/shadow: No such file or directory May 14 00:04:36.680456 coreos-metadata[813]: May 14 00:04:36.680 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1 May 14 00:04:36.681458 coreos-metadata[813]: May 14 00:04:36.681 INFO Fetch successful May 14 00:04:36.681971 coreos-metadata[813]: May 14 00:04:36.681 INFO wrote hostname ci-4284-0-0-n-fdde459219 to /sysroot/etc/hostname May 14 00:04:36.683567 initrd-setup-root[859]: cut: /sysroot/etc/gshadow: No such file or directory May 14 00:04:36.684934 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. May 14 00:04:36.771745 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. May 14 00:04:36.773816 systemd[1]: Starting ignition-mount.service - Ignition (mount)... May 14 00:04:36.776143 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... May 14 00:04:36.791070 kernel: BTRFS info (device sda6): last unmount of filesystem c0e200fb-7321-4d2d-86ff-b28bdae5fafc May 14 00:04:36.810928 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
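flatcar-metadata-hostname.service does essentially the same thing for the hostname endpoint and writes the result into the new root. A sketch of that step, assuming the endpoint and the target path /sysroot/etc/hostname exactly as they appear in the log; this is an illustration of the behaviour, not the agent's actual code:

    import urllib.request

    HOSTNAME_URL = "http://169.254.169.254/hetzner/v1/metadata/hostname"  # from the log
    TARGET = "/sysroot/etc/hostname"                                      # from the log

    with urllib.request.urlopen(HOSTNAME_URL, timeout=10) as resp:
        hostname = resp.read().decode().strip()

    with open(TARGET, "w") as f:
        f.write(hostname + "\n")

    print("wrote hostname", hostname, "to", TARGET)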
May 14 00:04:36.816824 ignition[929]: INFO : Ignition 2.20.0 May 14 00:04:36.816824 ignition[929]: INFO : Stage: mount May 14 00:04:36.819020 ignition[929]: INFO : no configs at "/usr/lib/ignition/base.d" May 14 00:04:36.819020 ignition[929]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" May 14 00:04:36.819020 ignition[929]: INFO : mount: mount passed May 14 00:04:36.819020 ignition[929]: INFO : Ignition finished successfully May 14 00:04:36.819265 systemd[1]: Finished ignition-mount.service - Ignition (mount). May 14 00:04:36.823737 systemd[1]: Starting ignition-files.service - Ignition (files)... May 14 00:04:36.843964 systemd[1]: sysroot-oem.mount: Deactivated successfully. May 14 00:04:36.845798 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 14 00:04:36.867077 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sda6 scanned by mount (940) May 14 00:04:36.871070 kernel: BTRFS info (device sda6): first mount of filesystem c0e200fb-7321-4d2d-86ff-b28bdae5fafc May 14 00:04:36.871118 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm May 14 00:04:36.873157 kernel: BTRFS info (device sda6): using free space tree May 14 00:04:36.881391 kernel: BTRFS info (device sda6): enabling ssd optimizations May 14 00:04:36.881424 kernel: BTRFS info (device sda6): auto enabling async discard May 14 00:04:36.887504 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 14 00:04:36.918484 ignition[956]: INFO : Ignition 2.20.0 May 14 00:04:36.918484 ignition[956]: INFO : Stage: files May 14 00:04:36.920283 ignition[956]: INFO : no configs at "/usr/lib/ignition/base.d" May 14 00:04:36.920283 ignition[956]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" May 14 00:04:36.922526 ignition[956]: DEBUG : files: compiled without relabeling support, skipping May 14 00:04:36.922526 ignition[956]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 14 00:04:36.922526 ignition[956]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 14 00:04:36.927814 ignition[956]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 14 00:04:36.927814 ignition[956]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 14 00:04:36.927814 ignition[956]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 14 00:04:36.927329 unknown[956]: wrote ssh authorized keys file for user: core May 14 00:04:36.933830 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" May 14 00:04:36.933830 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 May 14 00:04:37.208353 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK May 14 00:04:37.256182 systemd-networkd[771]: eth1: Gained IPv6LL May 14 00:04:37.320148 systemd-networkd[771]: eth0: Gained IPv6LL May 14 00:04:38.793201 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" May 14 00:04:38.794603 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" May 14 00:04:38.794603 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET 
https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 May 14 00:04:39.435442 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK May 14 00:04:39.515134 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" May 14 00:04:39.515134 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" May 14 00:04:39.518736 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" May 14 00:04:39.518736 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" May 14 00:04:39.518736 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" May 14 00:04:39.518736 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 14 00:04:39.518736 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 14 00:04:39.518736 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 14 00:04:39.518736 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 14 00:04:39.518736 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" May 14 00:04:39.518736 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 14 00:04:39.518736 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" May 14 00:04:39.518736 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" May 14 00:04:39.518736 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" May 14 00:04:39.518736 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 May 14 00:04:40.090599 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK May 14 00:04:40.345423 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" May 14 00:04:40.345423 ignition[956]: INFO : files: op(c): [started] processing unit "prepare-helm.service" May 14 00:04:40.347713 ignition[956]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 14 00:04:40.347713 ignition[956]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 14 00:04:40.347713 ignition[956]: INFO : files: op(c): [finished] processing 
unit "prepare-helm.service" May 14 00:04:40.347713 ignition[956]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" May 14 00:04:40.347713 ignition[956]: INFO : files: op(e): op(f): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" May 14 00:04:40.347713 ignition[956]: INFO : files: op(e): op(f): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" May 14 00:04:40.347713 ignition[956]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" May 14 00:04:40.347713 ignition[956]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" May 14 00:04:40.347713 ignition[956]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" May 14 00:04:40.347713 ignition[956]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json" May 14 00:04:40.363980 ignition[956]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json" May 14 00:04:40.363980 ignition[956]: INFO : files: files passed May 14 00:04:40.363980 ignition[956]: INFO : Ignition finished successfully May 14 00:04:40.350399 systemd[1]: Finished ignition-files.service - Ignition (files). May 14 00:04:40.357305 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... May 14 00:04:40.360995 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... May 14 00:04:40.373976 systemd[1]: ignition-quench.service: Deactivated successfully. May 14 00:04:40.374078 systemd[1]: Finished ignition-quench.service - Ignition (record completion). May 14 00:04:40.378556 initrd-setup-root-after-ignition[987]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 14 00:04:40.378556 initrd-setup-root-after-ignition[987]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory May 14 00:04:40.381044 initrd-setup-root-after-ignition[991]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 14 00:04:40.382391 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. May 14 00:04:40.383926 systemd[1]: Reached target ignition-complete.target - Ignition Complete. May 14 00:04:40.385872 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... May 14 00:04:40.444697 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 14 00:04:40.444788 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. May 14 00:04:40.446275 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. May 14 00:04:40.447706 systemd[1]: Reached target initrd.target - Initrd Default Target. May 14 00:04:40.449174 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. May 14 00:04:40.451352 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... May 14 00:04:40.472584 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 14 00:04:40.475276 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... May 14 00:04:40.500175 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. 
May 14 00:04:40.501265 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. May 14 00:04:40.503502 systemd[1]: Stopped target timers.target - Timer Units. May 14 00:04:40.505564 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 14 00:04:40.505733 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 14 00:04:40.507968 systemd[1]: Stopped target initrd.target - Initrd Default Target. May 14 00:04:40.509210 systemd[1]: Stopped target basic.target - Basic System. May 14 00:04:40.511282 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. May 14 00:04:40.513104 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. May 14 00:04:40.514879 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. May 14 00:04:40.516954 systemd[1]: Stopped target remote-fs.target - Remote File Systems. May 14 00:04:40.519060 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. May 14 00:04:40.521196 systemd[1]: Stopped target sysinit.target - System Initialization. May 14 00:04:40.523209 systemd[1]: Stopped target local-fs.target - Local File Systems. May 14 00:04:40.525406 systemd[1]: Stopped target swap.target - Swaps. May 14 00:04:40.527426 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 14 00:04:40.527614 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. May 14 00:04:40.529830 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. May 14 00:04:40.531114 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 14 00:04:40.532889 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. May 14 00:04:40.533612 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 14 00:04:40.535042 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 14 00:04:40.535203 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. May 14 00:04:40.538147 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 14 00:04:40.538313 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 14 00:04:40.539507 systemd[1]: ignition-files.service: Deactivated successfully. May 14 00:04:40.539715 systemd[1]: Stopped ignition-files.service - Ignition (files). May 14 00:04:40.541160 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. May 14 00:04:40.541303 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. May 14 00:04:40.546308 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 14 00:04:40.548069 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 14 00:04:40.548315 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. May 14 00:04:40.553304 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 14 00:04:40.555698 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 14 00:04:40.555977 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. May 14 00:04:40.560753 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 14 00:04:40.560967 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 14 00:04:40.576524 systemd[1]: initrd-cleanup.service: Deactivated successfully. 
May 14 00:04:40.576654 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. May 14 00:04:40.591081 ignition[1011]: INFO : Ignition 2.20.0 May 14 00:04:40.591081 ignition[1011]: INFO : Stage: umount May 14 00:04:40.591081 ignition[1011]: INFO : no configs at "/usr/lib/ignition/base.d" May 14 00:04:40.591081 ignition[1011]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" May 14 00:04:40.603558 ignition[1011]: INFO : umount: umount passed May 14 00:04:40.603558 ignition[1011]: INFO : Ignition finished successfully May 14 00:04:40.597385 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 14 00:04:40.605482 systemd[1]: ignition-mount.service: Deactivated successfully. May 14 00:04:40.605627 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 14 00:04:40.607499 systemd[1]: ignition-disks.service: Deactivated successfully. May 14 00:04:40.607595 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 14 00:04:40.608416 systemd[1]: ignition-kargs.service: Deactivated successfully. May 14 00:04:40.608471 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 14 00:04:40.610989 systemd[1]: ignition-fetch.service: Deactivated successfully. May 14 00:04:40.611117 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). May 14 00:04:40.625495 systemd[1]: Stopped target network.target - Network. May 14 00:04:40.626987 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 14 00:04:40.627089 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 14 00:04:40.628837 systemd[1]: Stopped target paths.target - Path Units. May 14 00:04:40.630355 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 14 00:04:40.631737 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 14 00:04:40.633395 systemd[1]: Stopped target slices.target - Slice Units. May 14 00:04:40.635177 systemd[1]: Stopped target sockets.target - Socket Units. May 14 00:04:40.636984 systemd[1]: iscsid.socket: Deactivated successfully. May 14 00:04:40.637056 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 14 00:04:40.648791 systemd[1]: iscsiuio.socket: Deactivated successfully. May 14 00:04:40.648848 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 14 00:04:40.650640 systemd[1]: ignition-setup.service: Deactivated successfully. May 14 00:04:40.650710 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 14 00:04:40.652165 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 14 00:04:40.652218 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 14 00:04:40.654320 systemd[1]: Stopping systemd-networkd.service - Network Configuration... May 14 00:04:40.657856 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 14 00:04:40.662358 systemd[1]: sysroot-boot.service: Deactivated successfully. May 14 00:04:40.662481 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. May 14 00:04:40.664215 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 14 00:04:40.664317 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. May 14 00:04:40.668077 systemd[1]: systemd-resolved.service: Deactivated successfully. May 14 00:04:40.668231 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. 
May 14 00:04:40.673417 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. May 14 00:04:40.673703 systemd[1]: systemd-networkd.service: Deactivated successfully. May 14 00:04:40.673832 systemd[1]: Stopped systemd-networkd.service - Network Configuration. May 14 00:04:40.676942 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. May 14 00:04:40.678045 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 14 00:04:40.678343 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 14 00:04:40.681167 systemd[1]: Stopping network-cleanup.service - Network Cleanup... May 14 00:04:40.683525 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 14 00:04:40.683617 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 14 00:04:40.684478 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 14 00:04:40.684541 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 14 00:04:40.686776 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 14 00:04:40.686849 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. May 14 00:04:40.688404 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. May 14 00:04:40.688472 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. May 14 00:04:40.690188 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... May 14 00:04:40.694190 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 14 00:04:40.694288 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. May 14 00:04:40.698519 systemd[1]: systemd-udevd.service: Deactivated successfully. May 14 00:04:40.698717 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. May 14 00:04:40.701965 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 14 00:04:40.702060 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. May 14 00:04:40.704597 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 14 00:04:40.704645 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. May 14 00:04:40.706414 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 14 00:04:40.706502 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. May 14 00:04:40.710626 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 14 00:04:40.710705 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. May 14 00:04:40.712376 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 14 00:04:40.712447 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 14 00:04:40.716311 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... May 14 00:04:40.717474 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 14 00:04:40.717572 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 14 00:04:40.722841 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 14 00:04:40.722906 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
May 14 00:04:40.726362 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. May 14 00:04:40.726450 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. May 14 00:04:40.729434 systemd[1]: network-cleanup.service: Deactivated successfully. May 14 00:04:40.729574 systemd[1]: Stopped network-cleanup.service - Network Cleanup. May 14 00:04:40.737003 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 14 00:04:40.737171 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. May 14 00:04:40.738952 systemd[1]: Reached target initrd-switch-root.target - Switch Root. May 14 00:04:40.743201 systemd[1]: Starting initrd-switch-root.service - Switch Root... May 14 00:04:40.763323 systemd[1]: Switching root. May 14 00:04:40.816924 systemd-journald[187]: Journal stopped May 14 00:04:42.142760 systemd-journald[187]: Received SIGTERM from PID 1 (systemd). May 14 00:04:42.142818 kernel: SELinux: policy capability network_peer_controls=1 May 14 00:04:42.142830 kernel: SELinux: policy capability open_perms=1 May 14 00:04:42.142839 kernel: SELinux: policy capability extended_socket_class=1 May 14 00:04:42.142848 kernel: SELinux: policy capability always_check_network=0 May 14 00:04:42.142857 kernel: SELinux: policy capability cgroup_seclabel=1 May 14 00:04:42.142869 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 14 00:04:42.142878 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 14 00:04:42.142887 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 14 00:04:42.142896 kernel: audit: type=1403 audit(1747181080.965:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 May 14 00:04:42.142907 systemd[1]: Successfully loaded SELinux policy in 58.295ms. May 14 00:04:42.142930 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 20.094ms. May 14 00:04:42.142943 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 14 00:04:42.142954 systemd[1]: Detected virtualization kvm. May 14 00:04:42.142965 systemd[1]: Detected architecture x86-64. May 14 00:04:42.142975 systemd[1]: Detected first boot. May 14 00:04:42.142986 systemd[1]: Hostname set to . May 14 00:04:42.142996 systemd[1]: Initializing machine ID from VM UUID. May 14 00:04:42.143006 zram_generator::config[1056]: No configuration found. May 14 00:04:42.143017 kernel: Guest personality initialized and is inactive May 14 00:04:42.143051 kernel: VMCI host device registered (name=vmci, major=10, minor=125) May 14 00:04:42.143061 kernel: Initialized host personality May 14 00:04:42.143070 kernel: NET: Registered PF_VSOCK protocol family May 14 00:04:42.143082 systemd[1]: Populated /etc with preset unit settings. May 14 00:04:42.143093 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. May 14 00:04:42.144522 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 14 00:04:42.144540 systemd[1]: Stopped initrd-switch-root.service - Switch Root. May 14 00:04:42.144550 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. 
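"Initializing machine ID from VM UUID" above refers to systemd deriving the first-boot /etc/machine-id from the UUID the hypervisor exposes through DMI. A sketch of reading that UUID and normalizing it to machine-id form (lower-case hex, no dashes); the exact derivation systemd performs is an assumption here, the snippet only shows where the value comes from and needs root to read the sysfs attribute:

    UUID_PATH = "/sys/class/dmi/id/product_uuid"  # DMI product UUID exposed by the VM firmware

    with open(UUID_PATH) as f:
        vm_uuid = f.read().strip()

    machine_id_like = vm_uuid.replace("-", "").lower()
    print("VM UUID:        ", vm_uuid)
    print("machine-id form:", machine_id_like)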
May 14 00:04:42.144560 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. May 14 00:04:42.144576 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. May 14 00:04:42.144586 systemd[1]: Created slice system-getty.slice - Slice /system/getty. May 14 00:04:42.144599 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. May 14 00:04:42.144613 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. May 14 00:04:42.144623 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. May 14 00:04:42.144633 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. May 14 00:04:42.144643 systemd[1]: Created slice user.slice - User and Session Slice. May 14 00:04:42.144653 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 14 00:04:42.144663 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 14 00:04:42.144674 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. May 14 00:04:42.144684 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. May 14 00:04:42.144696 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. May 14 00:04:42.144707 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 14 00:04:42.144717 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... May 14 00:04:42.144726 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 14 00:04:42.144736 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. May 14 00:04:42.144746 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. May 14 00:04:42.144758 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. May 14 00:04:42.144768 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. May 14 00:04:42.144777 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 14 00:04:42.144787 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 14 00:04:42.144797 systemd[1]: Reached target slices.target - Slice Units. May 14 00:04:42.144807 systemd[1]: Reached target swap.target - Swaps. May 14 00:04:42.144819 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. May 14 00:04:42.144830 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. May 14 00:04:42.144841 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. May 14 00:04:42.144852 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 14 00:04:42.144862 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 14 00:04:42.144871 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 14 00:04:42.144881 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. May 14 00:04:42.144891 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... May 14 00:04:42.144902 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... May 14 00:04:42.144913 systemd[1]: Mounting media.mount - External Media Directory... 
May 14 00:04:42.144923 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 14 00:04:42.144934 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... May 14 00:04:42.144944 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... May 14 00:04:42.144954 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... May 14 00:04:42.144964 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 14 00:04:42.144976 systemd[1]: Reached target machines.target - Containers. May 14 00:04:42.144986 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... May 14 00:04:42.144998 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 14 00:04:42.145008 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 14 00:04:42.145018 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... May 14 00:04:42.145041 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 14 00:04:42.145064 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 14 00:04:42.145074 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 14 00:04:42.145084 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... May 14 00:04:42.145094 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 14 00:04:42.145105 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 14 00:04:42.145116 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 14 00:04:42.145127 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. May 14 00:04:42.145137 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 14 00:04:42.145147 systemd[1]: Stopped systemd-fsck-usr.service. May 14 00:04:42.145158 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 14 00:04:42.145168 systemd[1]: Starting systemd-journald.service - Journal Service... May 14 00:04:42.145178 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 14 00:04:42.145188 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 14 00:04:42.145200 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... May 14 00:04:42.145210 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... May 14 00:04:42.145221 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 14 00:04:42.145231 systemd[1]: verity-setup.service: Deactivated successfully. May 14 00:04:42.145242 systemd[1]: Stopped verity-setup.service. May 14 00:04:42.145254 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 14 00:04:42.145266 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. 
May 14 00:04:42.145276 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. May 14 00:04:42.145286 systemd[1]: Mounted media.mount - External Media Directory. May 14 00:04:42.145298 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. May 14 00:04:42.145309 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. May 14 00:04:42.145319 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. May 14 00:04:42.145329 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 14 00:04:42.145339 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. May 14 00:04:42.145349 kernel: fuse: init (API version 7.39) May 14 00:04:42.145359 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 14 00:04:42.145370 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. May 14 00:04:42.145380 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 14 00:04:42.145390 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 14 00:04:42.145403 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 14 00:04:42.145412 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 14 00:04:42.145422 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 14 00:04:42.145432 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 14 00:04:42.145443 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. May 14 00:04:42.145454 systemd[1]: Reached target network-pre.target - Preparation for Network. May 14 00:04:42.145464 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... May 14 00:04:42.145504 systemd-journald[1140]: Collecting audit messages is disabled. May 14 00:04:42.145529 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 14 00:04:42.145540 kernel: ACPI: bus type drm_connector registered May 14 00:04:42.145550 systemd[1]: Reached target local-fs.target - Local File Systems. May 14 00:04:42.145559 kernel: loop: module loaded May 14 00:04:42.145570 systemd-journald[1140]: Journal started May 14 00:04:42.145593 systemd-journald[1140]: Runtime Journal (/run/log/journal/479fabf3dea0400b8a01cccaba15f131) is 4.7M, max 38.3M, 33.5M free. May 14 00:04:41.719226 systemd[1]: Queued start job for default target multi-user.target. May 14 00:04:41.729883 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. May 14 00:04:41.730752 systemd[1]: systemd-journald.service: Deactivated successfully. May 14 00:04:42.148059 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. May 14 00:04:42.154913 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... May 14 00:04:42.163254 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... May 14 00:04:42.163319 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 14 00:04:42.167051 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... May 14 00:04:42.174046 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
May 14 00:04:42.183097 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... May 14 00:04:42.183176 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 14 00:04:42.190048 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... May 14 00:04:42.203299 systemd[1]: Starting systemd-sysusers.service - Create System Users... May 14 00:04:42.203339 systemd[1]: Started systemd-journald.service - Journal Service. May 14 00:04:42.210063 systemd[1]: modprobe@drm.service: Deactivated successfully. May 14 00:04:42.210331 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 14 00:04:42.211511 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 14 00:04:42.217386 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. May 14 00:04:42.218399 systemd[1]: modprobe@loop.service: Deactivated successfully. May 14 00:04:42.219165 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 14 00:04:42.221384 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. May 14 00:04:42.222670 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 14 00:04:42.223566 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. May 14 00:04:42.225779 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. May 14 00:04:42.229382 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. May 14 00:04:42.232366 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 14 00:04:42.251588 kernel: loop0: detected capacity change from 0 to 109808 May 14 00:04:42.260682 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. May 14 00:04:42.263797 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... May 14 00:04:42.267207 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... May 14 00:04:42.270290 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 14 00:04:42.271528 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... May 14 00:04:42.286054 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 14 00:04:42.298240 systemd-journald[1140]: Time spent on flushing to /var/log/journal/479fabf3dea0400b8a01cccaba15f131 is 16.075ms for 1153 entries. May 14 00:04:42.298240 systemd-journald[1140]: System Journal (/var/log/journal/479fabf3dea0400b8a01cccaba15f131) is 8M, max 584.8M, 576.8M free. May 14 00:04:42.334253 systemd-journald[1140]: Received client request to flush runtime journal. May 14 00:04:42.334292 kernel: loop1: detected capacity change from 0 to 210664 May 14 00:04:42.299288 systemd[1]: Finished systemd-sysusers.service - Create System Users. May 14 00:04:42.304420 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 14 00:04:42.306866 udevadm[1192]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. May 14 00:04:42.324109 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. May 14 00:04:42.336086 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. 
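The journal statistics above (a small runtime journal in /run, a persistent journal under /var/log/journal, and the explicit flush request) can be reproduced after boot with journalctl; a sketch wrapping the two relevant invocations, assuming journalctl is on PATH and the caller has the needed privileges:

    import subprocess

    # Report how much disk space the active and archived journal files use
    subprocess.run(["journalctl", "--disk-usage"], check=True)

    # Ask journald to flush the runtime journal in /run into persistent storage,
    # which is what systemd-journal-flush.service triggers during boot
    subprocess.run(["journalctl", "--flush"], check=True)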
May 14 00:04:42.355537 systemd-tmpfiles[1196]: ACLs are not supported, ignoring. May 14 00:04:42.355552 systemd-tmpfiles[1196]: ACLs are not supported, ignoring. May 14 00:04:42.359941 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 14 00:04:42.374317 kernel: loop2: detected capacity change from 0 to 151640 May 14 00:04:42.436180 kernel: loop3: detected capacity change from 0 to 8 May 14 00:04:42.458255 kernel: loop4: detected capacity change from 0 to 109808 May 14 00:04:42.487290 kernel: loop5: detected capacity change from 0 to 210664 May 14 00:04:42.521139 kernel: loop6: detected capacity change from 0 to 151640 May 14 00:04:42.552518 kernel: loop7: detected capacity change from 0 to 8 May 14 00:04:42.559157 (sd-merge)[1205]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'. May 14 00:04:42.559613 (sd-merge)[1205]: Merged extensions into '/usr'. May 14 00:04:42.565797 systemd[1]: Reload requested from client PID 1162 ('systemd-sysext') (unit systemd-sysext.service)... May 14 00:04:42.566985 systemd[1]: Reloading... May 14 00:04:42.650060 zram_generator::config[1230]: No configuration found. May 14 00:04:42.772056 ldconfig[1158]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 14 00:04:42.804450 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 14 00:04:42.873823 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 14 00:04:42.874201 systemd[1]: Reloading finished in 306 ms. May 14 00:04:42.888570 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. May 14 00:04:42.889372 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. May 14 00:04:42.894454 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... May 14 00:04:42.906134 systemd[1]: Starting ensure-sysext.service... May 14 00:04:42.910119 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 14 00:04:42.926942 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. May 14 00:04:42.939014 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. May 14 00:04:42.940371 systemd[1]: Reload requested from client PID 1277 ('systemctl') (unit ensure-sysext.service)... May 14 00:04:42.940636 systemd[1]: Reloading... May 14 00:04:42.941774 systemd-tmpfiles[1278]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 14 00:04:42.941973 systemd-tmpfiles[1278]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. May 14 00:04:42.942961 systemd-tmpfiles[1278]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 14 00:04:42.943291 systemd-tmpfiles[1278]: ACLs are not supported, ignoring. May 14 00:04:42.943383 systemd-tmpfiles[1278]: ACLs are not supported, ignoring. May 14 00:04:42.946643 systemd-tmpfiles[1278]: Detected autofs mount point /boot during canonicalization of boot. May 14 00:04:42.946748 systemd-tmpfiles[1278]: Skipping /boot May 14 00:04:42.955781 systemd-tmpfiles[1278]: Detected autofs mount point /boot during canonicalization of boot. 
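The sd-merge lines above show systemd-sysext overlaying the containerd-flatcar, docker-flatcar, kubernetes and oem-hetzner extension images onto /usr, followed by a service-manager reload. The merge state can be inspected the same way after boot; a sketch, assuming systemd-sysext is available on PATH and run with sufficient privileges:

    import subprocess

    # List the hierarchies (typically /usr and /opt) and which extension
    # images are currently merged into them
    subprocess.run(["systemd-sysext", "status"], check=True)

    # Re-scan the extension directories and refresh the overlay; this is the
    # manual equivalent of the merge logged above
    subprocess.run(["systemd-sysext", "refresh"], check=True)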
May 14 00:04:42.955896 systemd-tmpfiles[1278]: Skipping /boot May 14 00:04:43.004739 zram_generator::config[1311]: No configuration found. May 14 00:04:43.112057 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 14 00:04:43.180650 systemd[1]: Reloading finished in 239 ms. May 14 00:04:43.192899 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 14 00:04:43.209177 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 14 00:04:43.212301 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... May 14 00:04:43.215556 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... May 14 00:04:43.219710 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 14 00:04:43.227364 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 14 00:04:43.233140 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... May 14 00:04:43.241017 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 14 00:04:43.241208 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 14 00:04:43.244083 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 14 00:04:43.251048 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 14 00:04:43.260638 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 14 00:04:43.262180 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 14 00:04:43.262294 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 14 00:04:43.264301 systemd[1]: Starting systemd-userdbd.service - User Database Manager... May 14 00:04:43.264860 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 14 00:04:43.265958 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 14 00:04:43.267143 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 14 00:04:43.267949 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 14 00:04:43.268127 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 14 00:04:43.275351 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 14 00:04:43.275594 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 14 00:04:43.279406 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 14 00:04:43.284555 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 14 00:04:43.287154 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
May 14 00:04:43.287327 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 14 00:04:43.287475 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 14 00:04:43.293467 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. May 14 00:04:43.299746 systemd[1]: Finished ensure-sysext.service. May 14 00:04:43.304305 systemd-udevd[1356]: Using default interface naming scheme 'v255'. May 14 00:04:43.305243 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 14 00:04:43.305399 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 14 00:04:43.307233 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 14 00:04:43.308283 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 14 00:04:43.308326 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 14 00:04:43.316123 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... May 14 00:04:43.316839 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 14 00:04:43.317173 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. May 14 00:04:43.327939 systemd[1]: Starting systemd-update-done.service - Update is Completed... May 14 00:04:43.328773 systemd[1]: modprobe@loop.service: Deactivated successfully. May 14 00:04:43.329053 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 14 00:04:43.337858 systemd[1]: modprobe@drm.service: Deactivated successfully. May 14 00:04:43.338082 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 14 00:04:43.339420 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 14 00:04:43.339575 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 14 00:04:43.340776 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 14 00:04:43.340905 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 14 00:04:43.344872 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 14 00:04:43.344930 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 14 00:04:43.358424 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. May 14 00:04:43.359570 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 14 00:04:43.361241 systemd[1]: Finished systemd-update-done.service - Update is Completed. 
May 14 00:04:43.364071 systemd[1]: Started systemd-userdbd.service - User Database Manager. May 14 00:04:43.365122 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 14 00:04:43.368147 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 14 00:04:43.378737 augenrules[1410]: No rules May 14 00:04:43.376442 systemd[1]: audit-rules.service: Deactivated successfully. May 14 00:04:43.376637 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 14 00:04:43.438335 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. May 14 00:04:43.532156 systemd-networkd[1405]: lo: Link UP May 14 00:04:43.532169 systemd-networkd[1405]: lo: Gained carrier May 14 00:04:43.535491 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. May 14 00:04:43.536528 systemd[1]: Reached target time-set.target - System Time Set. May 14 00:04:43.539843 systemd-networkd[1405]: Enumeration completed May 14 00:04:43.540108 systemd[1]: Started systemd-networkd.service - Network Configuration. May 14 00:04:43.541215 systemd-networkd[1405]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 14 00:04:43.541227 systemd-networkd[1405]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 14 00:04:43.543854 systemd-networkd[1405]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 14 00:04:43.543866 systemd-networkd[1405]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. May 14 00:04:43.544415 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... May 14 00:04:43.546582 systemd-resolved[1355]: Positive Trust Anchors: May 14 00:04:43.546851 systemd-resolved[1355]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 14 00:04:43.546922 systemd-resolved[1355]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 14 00:04:43.549310 systemd-networkd[1405]: eth0: Link UP May 14 00:04:43.549321 systemd-networkd[1405]: eth0: Gained carrier May 14 00:04:43.549337 systemd-networkd[1405]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 14 00:04:43.551099 systemd-timesyncd[1377]: No network connectivity, watching for changes. May 14 00:04:43.551420 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... May 14 00:04:43.552872 systemd-resolved[1355]: Using system hostname 'ci-4284-0-0-n-fdde459219'. 
May 14 00:04:43.559117 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1424) May 14 00:04:43.556364 systemd-networkd[1405]: eth1: Link UP May 14 00:04:43.556368 systemd-networkd[1405]: eth1: Gained carrier May 14 00:04:43.556385 systemd-networkd[1405]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 14 00:04:43.556890 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 14 00:04:43.558090 systemd[1]: Reached target network.target - Network. May 14 00:04:43.558581 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 14 00:04:43.588097 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. May 14 00:04:43.594103 systemd-networkd[1405]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1 May 14 00:04:43.594976 systemd-timesyncd[1377]: Network configuration changed, trying to establish connection. May 14 00:04:43.603650 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 May 14 00:04:43.608198 kernel: ACPI: button: Power Button [PWRF] May 14 00:04:43.621080 systemd-networkd[1405]: eth0: DHCPv4 address 95.217.191.100/32, gateway 172.31.1.1 acquired from 172.31.1.1 May 14 00:04:43.622276 systemd-timesyncd[1377]: Network configuration changed, trying to establish connection. May 14 00:04:43.631056 kernel: mousedev: PS/2 mouse device common for all mice May 14 00:04:43.633611 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped. May 14 00:04:43.633775 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 14 00:04:43.633925 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 14 00:04:43.635458 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 14 00:04:43.641248 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 14 00:04:43.645148 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 14 00:04:43.645693 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 14 00:04:43.645719 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 14 00:04:43.645741 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 14 00:04:43.645753 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 14 00:04:43.657286 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 14 00:04:43.657597 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 14 00:04:43.659437 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
May 14 00:04:43.661709 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 14 00:04:43.661849 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 14 00:04:43.662726 systemd[1]: modprobe@loop.service: Deactivated successfully. May 14 00:04:43.662844 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 14 00:04:43.663365 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 14 00:04:43.667763 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. May 14 00:04:43.669469 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... May 14 00:04:43.681372 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt May 14 00:04:43.683714 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) May 14 00:04:43.683844 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD May 14 00:04:43.695489 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 May 14 00:04:43.698892 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. May 14 00:04:43.701302 kernel: EDAC MC: Ver: 3.0.0 May 14 00:04:43.724349 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 14 00:04:43.728069 kernel: [drm] pci: virtio-vga detected at 0000:00:01.0 May 14 00:04:43.728119 kernel: virtio-pci 0000:00:01.0: vgaarb: deactivate vga console May 14 00:04:43.734809 kernel: Console: switching to colour dummy device 80x25 May 14 00:04:43.735049 kernel: [drm] features: -virgl +edid -resource_blob -host_visible May 14 00:04:43.735072 kernel: [drm] features: -context_init May 14 00:04:43.738315 kernel: [drm] number of scanouts: 1 May 14 00:04:43.738394 kernel: [drm] number of cap sets: 0 May 14 00:04:43.741057 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0 May 14 00:04:43.751743 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device May 14 00:04:43.751802 kernel: Console: switching to colour frame buffer device 160x50 May 14 00:04:43.759598 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device May 14 00:04:43.765800 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 14 00:04:43.766093 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 14 00:04:43.775327 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 14 00:04:43.839696 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 14 00:04:43.885771 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. May 14 00:04:43.889005 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... May 14 00:04:43.915688 lvm[1474]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 14 00:04:43.953605 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. May 14 00:04:43.955421 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 14 00:04:43.955590 systemd[1]: Reached target sysinit.target - System Initialization. May 14 00:04:43.955844 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. 
May 14 00:04:43.955996 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 14 00:04:43.956409 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 14 00:04:43.956699 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 14 00:04:43.956820 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 14 00:04:43.956928 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 14 00:04:43.956983 systemd[1]: Reached target paths.target - Path Units. May 14 00:04:43.958608 systemd[1]: Reached target timers.target - Timer Units. May 14 00:04:43.960682 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 14 00:04:43.963589 systemd[1]: Starting docker.socket - Docker Socket for the API... May 14 00:04:43.969667 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). May 14 00:04:43.971315 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). May 14 00:04:43.971460 systemd[1]: Reached target ssh-access.target - SSH Access Available. May 14 00:04:43.984275 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 14 00:04:43.985075 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. May 14 00:04:43.987270 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... May 14 00:04:43.991050 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 14 00:04:43.995380 systemd[1]: Reached target sockets.target - Socket Units. May 14 00:04:43.997775 systemd[1]: Reached target basic.target - Basic System. May 14 00:04:44.001900 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 14 00:04:44.001965 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 14 00:04:44.004718 lvm[1478]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 14 00:04:44.005153 systemd[1]: Starting containerd.service - containerd container runtime... May 14 00:04:44.011338 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... May 14 00:04:44.018828 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 14 00:04:44.032414 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 14 00:04:44.037339 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 14 00:04:44.038963 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 14 00:04:44.048279 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 14 00:04:44.055602 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 14 00:04:44.063402 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent. 
May 14 00:04:44.066057 jq[1482]: false May 14 00:04:44.073008 coreos-metadata[1480]: May 14 00:04:44.072 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1 May 14 00:04:44.088701 coreos-metadata[1480]: May 14 00:04:44.086 INFO Fetch successful May 14 00:04:44.088701 coreos-metadata[1480]: May 14 00:04:44.087 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1 May 14 00:04:44.088701 coreos-metadata[1480]: May 14 00:04:44.087 INFO Fetch successful May 14 00:04:44.074885 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 14 00:04:44.082372 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 14 00:04:44.096264 systemd[1]: Starting systemd-logind.service - User Login Management... May 14 00:04:44.099406 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 14 00:04:44.099862 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 14 00:04:44.103865 systemd[1]: Starting update-engine.service - Update Engine... May 14 00:04:44.111160 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 14 00:04:44.114937 dbus-daemon[1481]: [system] SELinux support is enabled May 14 00:04:44.115552 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 14 00:04:44.121802 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. May 14 00:04:44.128374 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 14 00:04:44.128656 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 14 00:04:44.129240 systemd[1]: motdgen.service: Deactivated successfully. May 14 00:04:44.129380 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 14 00:04:44.147945 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 14 00:04:44.148205 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. May 14 00:04:44.165181 extend-filesystems[1485]: Found loop4 May 14 00:04:44.165181 extend-filesystems[1485]: Found loop5 May 14 00:04:44.165181 extend-filesystems[1485]: Found loop6 May 14 00:04:44.165181 extend-filesystems[1485]: Found loop7 May 14 00:04:44.165181 extend-filesystems[1485]: Found sda May 14 00:04:44.165181 extend-filesystems[1485]: Found sda1 May 14 00:04:44.165181 extend-filesystems[1485]: Found sda2 May 14 00:04:44.165181 extend-filesystems[1485]: Found sda3 May 14 00:04:44.165181 extend-filesystems[1485]: Found usr May 14 00:04:44.165181 extend-filesystems[1485]: Found sda4 May 14 00:04:44.165181 extend-filesystems[1485]: Found sda6 May 14 00:04:44.165181 extend-filesystems[1485]: Found sda7 May 14 00:04:44.165181 extend-filesystems[1485]: Found sda9 May 14 00:04:44.165181 extend-filesystems[1485]: Checking size of /dev/sda9 May 14 00:04:44.214551 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks May 14 00:04:44.155998 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). 
May 14 00:04:44.224552 extend-filesystems[1485]: Resized partition /dev/sda9 May 14 00:04:44.225801 update_engine[1499]: I20250514 00:04:44.208070 1499 main.cc:92] Flatcar Update Engine starting May 14 00:04:44.225801 update_engine[1499]: I20250514 00:04:44.223872 1499 update_check_scheduler.cc:74] Next update check in 8m6s May 14 00:04:44.225973 jq[1502]: true May 14 00:04:44.156043 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 14 00:04:44.229766 extend-filesystems[1521]: resize2fs 1.47.2 (1-Jan-2025) May 14 00:04:44.158889 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 14 00:04:44.233637 tar[1507]: linux-amd64/helm May 14 00:04:44.158903 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 14 00:04:44.216531 (ntainerd)[1513]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 14 00:04:44.221894 systemd[1]: Started update-engine.service - Update Engine. May 14 00:04:44.240122 jq[1522]: true May 14 00:04:44.244161 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 14 00:04:44.295546 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1409) May 14 00:04:44.319986 kernel: EXT4-fs (sda9): resized filesystem to 9393147 May 14 00:04:44.331468 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. May 14 00:04:44.346049 extend-filesystems[1521]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required May 14 00:04:44.346049 extend-filesystems[1521]: old_desc_blocks = 1, new_desc_blocks = 5 May 14 00:04:44.346049 extend-filesystems[1521]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long. May 14 00:04:44.361239 extend-filesystems[1485]: Resized filesystem in /dev/sda9 May 14 00:04:44.361239 extend-filesystems[1485]: Found sr0 May 14 00:04:44.353043 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 14 00:04:44.354991 systemd[1]: extend-filesystems.service: Deactivated successfully. May 14 00:04:44.355222 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 14 00:04:44.389050 bash[1550]: Updated "/home/core/.ssh/authorized_keys" May 14 00:04:44.385338 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 14 00:04:44.389230 sshd_keygen[1503]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 14 00:04:44.394996 systemd[1]: Starting sshkeys.service... May 14 00:04:44.414723 systemd-logind[1497]: New seat seat0. May 14 00:04:44.418469 systemd-logind[1497]: Watching system buttons on /dev/input/event2 (Power Button) May 14 00:04:44.418485 systemd-logind[1497]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) May 14 00:04:44.423558 systemd[1]: Started systemd-logind.service - User Login Management. May 14 00:04:44.436100 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. May 14 00:04:44.442241 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... May 14 00:04:44.471800 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 14 00:04:44.484310 systemd[1]: Starting issuegen.service - Generate /run/issue... 
May 14 00:04:44.510664 systemd[1]: issuegen.service: Deactivated successfully. May 14 00:04:44.510822 systemd[1]: Finished issuegen.service - Generate /run/issue. May 14 00:04:44.517485 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 14 00:04:44.525634 locksmithd[1528]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 14 00:04:44.527465 coreos-metadata[1563]: May 14 00:04:44.526 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 May 14 00:04:44.533112 coreos-metadata[1563]: May 14 00:04:44.533 INFO Fetch successful May 14 00:04:44.534312 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 14 00:04:44.539101 unknown[1563]: wrote ssh authorized keys file for user: core May 14 00:04:44.540677 systemd[1]: Started getty@tty1.service - Getty on tty1. May 14 00:04:44.549345 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. May 14 00:04:44.553594 systemd[1]: Reached target getty.target - Login Prompts. May 14 00:04:44.601068 update-ssh-keys[1587]: Updated "/home/core/.ssh/authorized_keys" May 14 00:04:44.602721 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). May 14 00:04:44.609403 systemd[1]: Finished sshkeys.service. May 14 00:04:44.623191 containerd[1513]: time="2025-05-14T00:04:44Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 May 14 00:04:44.623714 containerd[1513]: time="2025-05-14T00:04:44.623652081Z" level=info msg="starting containerd" revision=88aa2f531d6c2922003cc7929e51daf1c14caa0a version=v2.0.1 May 14 00:04:44.637049 containerd[1513]: time="2025-05-14T00:04:44.635642911Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="8.195µs" May 14 00:04:44.637049 containerd[1513]: time="2025-05-14T00:04:44.635719745Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 May 14 00:04:44.637049 containerd[1513]: time="2025-05-14T00:04:44.635739892Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 May 14 00:04:44.637049 containerd[1513]: time="2025-05-14T00:04:44.635922595Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 May 14 00:04:44.637049 containerd[1513]: time="2025-05-14T00:04:44.635938616Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 May 14 00:04:44.637049 containerd[1513]: time="2025-05-14T00:04:44.635961388Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 14 00:04:44.637049 containerd[1513]: time="2025-05-14T00:04:44.636061135Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 14 00:04:44.637049 containerd[1513]: time="2025-05-14T00:04:44.636072877Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 14 00:04:44.637049 containerd[1513]: time="2025-05-14T00:04:44.636371297Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: 
skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 14 00:04:44.637049 containerd[1513]: time="2025-05-14T00:04:44.636384642Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 14 00:04:44.637049 containerd[1513]: time="2025-05-14T00:04:44.636394059Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 14 00:04:44.637049 containerd[1513]: time="2025-05-14T00:04:44.636401624Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 May 14 00:04:44.637300 containerd[1513]: time="2025-05-14T00:04:44.636755557Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 May 14 00:04:44.637300 containerd[1513]: time="2025-05-14T00:04:44.636964369Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 14 00:04:44.637300 containerd[1513]: time="2025-05-14T00:04:44.637051944Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 14 00:04:44.637300 containerd[1513]: time="2025-05-14T00:04:44.637064267Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 May 14 00:04:44.637300 containerd[1513]: time="2025-05-14T00:04:44.637097389Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 May 14 00:04:44.637383 containerd[1513]: time="2025-05-14T00:04:44.637311450Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 May 14 00:04:44.637383 containerd[1513]: time="2025-05-14T00:04:44.637356816Z" level=info msg="metadata content store policy set" policy=shared May 14 00:04:44.642074 containerd[1513]: time="2025-05-14T00:04:44.642037865Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 May 14 00:04:44.642304 containerd[1513]: time="2025-05-14T00:04:44.642283345Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 May 14 00:04:44.642338 containerd[1513]: time="2025-05-14T00:04:44.642313241Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 May 14 00:04:44.643049 containerd[1513]: time="2025-05-14T00:04:44.642534046Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 May 14 00:04:44.643049 containerd[1513]: time="2025-05-14T00:04:44.642569573Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 May 14 00:04:44.643049 containerd[1513]: time="2025-05-14T00:04:44.642585572Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 May 14 00:04:44.643049 containerd[1513]: time="2025-05-14T00:04:44.642601682Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 May 14 00:04:44.643049 containerd[1513]: time="2025-05-14T00:04:44.642613986Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 May 14 00:04:44.643049 containerd[1513]: 
time="2025-05-14T00:04:44.642646406Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 May 14 00:04:44.643049 containerd[1513]: time="2025-05-14T00:04:44.642659731Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 May 14 00:04:44.643049 containerd[1513]: time="2025-05-14T00:04:44.642672135Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 May 14 00:04:44.643049 containerd[1513]: time="2025-05-14T00:04:44.642686572Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 May 14 00:04:44.643049 containerd[1513]: time="2025-05-14T00:04:44.642793973Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 May 14 00:04:44.643049 containerd[1513]: time="2025-05-14T00:04:44.642815935Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 May 14 00:04:44.643049 containerd[1513]: time="2025-05-14T00:04:44.642826945Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 May 14 00:04:44.643049 containerd[1513]: time="2025-05-14T00:04:44.642839569Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 May 14 00:04:44.643049 containerd[1513]: time="2025-05-14T00:04:44.642851822Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 May 14 00:04:44.643285 containerd[1513]: time="2025-05-14T00:04:44.642862732Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 May 14 00:04:44.643285 containerd[1513]: time="2025-05-14T00:04:44.642876438Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 May 14 00:04:44.643285 containerd[1513]: time="2025-05-14T00:04:44.642889282Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 May 14 00:04:44.643285 containerd[1513]: time="2025-05-14T00:04:44.642902587Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 May 14 00:04:44.643285 containerd[1513]: time="2025-05-14T00:04:44.642915551Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 May 14 00:04:44.643285 containerd[1513]: time="2025-05-14T00:04:44.642925981Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 May 14 00:04:44.643285 containerd[1513]: time="2025-05-14T00:04:44.642993277Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" May 14 00:04:44.643285 containerd[1513]: time="2025-05-14T00:04:44.643008646Z" level=info msg="Start snapshots syncer" May 14 00:04:44.645217 containerd[1513]: time="2025-05-14T00:04:44.645102403Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 May 14 00:04:44.645592 containerd[1513]: time="2025-05-14T00:04:44.645553799Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" May 14 00:04:44.649199 containerd[1513]: time="2025-05-14T00:04:44.648186597Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 May 14 00:04:44.649199 containerd[1513]: time="2025-05-14T00:04:44.648266868Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 May 14 00:04:44.649199 containerd[1513]: time="2025-05-14T00:04:44.648376273Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 May 14 00:04:44.649199 containerd[1513]: time="2025-05-14T00:04:44.648395440Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 May 14 00:04:44.649199 containerd[1513]: time="2025-05-14T00:04:44.648408494Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 May 14 00:04:44.649199 containerd[1513]: time="2025-05-14T00:04:44.648417430Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 May 14 00:04:44.649199 containerd[1513]: time="2025-05-14T00:04:44.648428001Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 May 14 00:04:44.649199 containerd[1513]: time="2025-05-14T00:04:44.648437127Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 May 14 00:04:44.649199 containerd[1513]: time="2025-05-14T00:04:44.648446736Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 May 14 00:04:44.649199 containerd[1513]: time="2025-05-14T00:04:44.648466723Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 May 14 00:04:44.649199 containerd[1513]: 
time="2025-05-14T00:04:44.648476932Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 May 14 00:04:44.649199 containerd[1513]: time="2025-05-14T00:04:44.648485188Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 May 14 00:04:44.649199 containerd[1513]: time="2025-05-14T00:04:44.648530703Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 14 00:04:44.649199 containerd[1513]: time="2025-05-14T00:04:44.648542736Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 14 00:04:44.649439 containerd[1513]: time="2025-05-14T00:04:44.648550430Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 14 00:04:44.649439 containerd[1513]: time="2025-05-14T00:04:44.648558455Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 14 00:04:44.649439 containerd[1513]: time="2025-05-14T00:04:44.648567863Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 May 14 00:04:44.649439 containerd[1513]: time="2025-05-14T00:04:44.648613999Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 May 14 00:04:44.649439 containerd[1513]: time="2025-05-14T00:04:44.648624920Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 May 14 00:04:44.649439 containerd[1513]: time="2025-05-14T00:04:44.648639507Z" level=info msg="runtime interface created" May 14 00:04:44.649439 containerd[1513]: time="2025-05-14T00:04:44.648644436Z" level=info msg="created NRI interface" May 14 00:04:44.649439 containerd[1513]: time="2025-05-14T00:04:44.648651790Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 May 14 00:04:44.649439 containerd[1513]: time="2025-05-14T00:04:44.648662420Z" level=info msg="Connect containerd service" May 14 00:04:44.649439 containerd[1513]: time="2025-05-14T00:04:44.648682017Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 14 00:04:44.649970 containerd[1513]: time="2025-05-14T00:04:44.649741043Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 14 00:04:44.740872 containerd[1513]: time="2025-05-14T00:04:44.740781399Z" level=info msg="Start subscribing containerd event" May 14 00:04:44.741003 containerd[1513]: time="2025-05-14T00:04:44.740981223Z" level=info msg="Start recovering state" May 14 00:04:44.741113 containerd[1513]: time="2025-05-14T00:04:44.741103232Z" level=info msg="Start event monitor" May 14 00:04:44.741158 containerd[1513]: time="2025-05-14T00:04:44.741150411Z" level=info msg="Start cni network conf syncer for default" May 14 00:04:44.741308 containerd[1513]: time="2025-05-14T00:04:44.741295513Z" level=info msg="Start streaming server" May 14 00:04:44.741366 containerd[1513]: time="2025-05-14T00:04:44.741357509Z" level=info msg="Registered namespace \"k8s.io\" with NRI" May 14 00:04:44.741402 containerd[1513]: 
time="2025-05-14T00:04:44.741395531Z" level=info msg="runtime interface starting up..." May 14 00:04:44.741434 containerd[1513]: time="2025-05-14T00:04:44.741427701Z" level=info msg="starting plugins..." May 14 00:04:44.741475 containerd[1513]: time="2025-05-14T00:04:44.741466854Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" May 14 00:04:44.741576 containerd[1513]: time="2025-05-14T00:04:44.741259004Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 14 00:04:44.741646 containerd[1513]: time="2025-05-14T00:04:44.741636162Z" level=info msg=serving... address=/run/containerd/containerd.sock May 14 00:04:44.741806 systemd[1]: Started containerd.service - containerd container runtime. May 14 00:04:44.743551 containerd[1513]: time="2025-05-14T00:04:44.743458760Z" level=info msg="containerd successfully booted in 0.120769s" May 14 00:04:44.812405 tar[1507]: linux-amd64/LICENSE May 14 00:04:44.812499 tar[1507]: linux-amd64/README.md May 14 00:04:44.843425 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 14 00:04:44.936347 systemd-networkd[1405]: eth1: Gained IPv6LL May 14 00:04:44.937265 systemd-timesyncd[1377]: Network configuration changed, trying to establish connection. May 14 00:04:44.940315 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 14 00:04:44.943557 systemd[1]: Reached target network-online.target - Network is Online. May 14 00:04:44.948860 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 00:04:44.962400 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 14 00:04:45.003106 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 14 00:04:45.258017 systemd-networkd[1405]: eth0: Gained IPv6LL May 14 00:04:45.258710 systemd-timesyncd[1377]: Network configuration changed, trying to establish connection. May 14 00:04:46.243751 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 00:04:46.245643 systemd[1]: Reached target multi-user.target - Multi-User System. May 14 00:04:46.251975 systemd[1]: Startup finished in 1.619s (kernel) + 8.168s (initrd) + 5.342s (userspace) = 15.130s. May 14 00:04:46.258486 (kubelet)[1624]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 14 00:04:47.262319 kubelet[1624]: E0514 00:04:47.262213 1624 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 14 00:04:47.266326 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 14 00:04:47.266583 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 14 00:04:47.267281 systemd[1]: kubelet.service: Consumed 1.698s CPU time, 242M memory peak. May 14 00:04:57.420460 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 14 00:04:57.423434 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 00:04:57.579216 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 14 00:04:57.596635 (kubelet)[1644]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 14 00:04:57.679072 kubelet[1644]: E0514 00:04:57.678786 1644 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 14 00:04:57.684925 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 14 00:04:57.685203 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 14 00:04:57.685855 systemd[1]: kubelet.service: Consumed 221ms CPU time, 98.4M memory peak. May 14 00:05:07.920292 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 14 00:05:07.922724 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 00:05:08.077558 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 00:05:08.089344 (kubelet)[1660]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 14 00:05:08.125165 kubelet[1660]: E0514 00:05:08.125081 1660 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 14 00:05:08.128577 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 14 00:05:08.128716 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 14 00:05:08.128991 systemd[1]: kubelet.service: Consumed 168ms CPU time, 96M memory peak. May 14 00:05:11.996427 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 14 00:05:11.998336 systemd[1]: Started sshd@0-95.217.191.100:22-103.232.80.5:59682.service - OpenSSH per-connection server daemon (103.232.80.5:59682). May 14 00:05:12.497619 sshd[1669]: Connection closed by 103.232.80.5 port 59682 [preauth] May 14 00:05:12.498938 systemd[1]: sshd@0-95.217.191.100:22-103.232.80.5:59682.service: Deactivated successfully. May 14 00:05:15.678257 systemd-timesyncd[1377]: Contacted time server 217.144.138.234:123 (2.flatcar.pool.ntp.org). May 14 00:05:15.678389 systemd-timesyncd[1377]: Initial clock synchronization to Wed 2025-05-14 00:05:15.954089 UTC. May 14 00:05:18.170794 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. May 14 00:05:18.173705 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 00:05:18.286102 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 14 00:05:18.295582 (kubelet)[1681]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 14 00:05:18.362719 kubelet[1681]: E0514 00:05:18.362623 1681 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 14 00:05:18.366914 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 14 00:05:18.367143 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 14 00:05:18.367632 systemd[1]: kubelet.service: Consumed 164ms CPU time, 95.7M memory peak. May 14 00:05:28.420810 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. May 14 00:05:28.423786 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 00:05:28.573485 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 00:05:28.579327 (kubelet)[1697]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 14 00:05:28.630923 kubelet[1697]: E0514 00:05:28.630850 1697 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 14 00:05:28.634068 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 14 00:05:28.634243 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 14 00:05:28.634562 systemd[1]: kubelet.service: Consumed 184ms CPU time, 95.9M memory peak. May 14 00:05:29.165562 update_engine[1499]: I20250514 00:05:29.165462 1499 update_attempter.cc:509] Updating boot flags... May 14 00:05:29.216356 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1714) May 14 00:05:29.290119 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1713) May 14 00:05:38.670546 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. May 14 00:05:38.673275 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 00:05:38.823395 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 00:05:38.833477 (kubelet)[1731]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 14 00:05:38.884511 kubelet[1731]: E0514 00:05:38.884429 1731 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 14 00:05:38.887373 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 14 00:05:38.887564 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 14 00:05:38.888112 systemd[1]: kubelet.service: Consumed 179ms CPU time, 93.9M memory peak. 
May 14 00:05:48.920446 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. May 14 00:05:48.923619 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 00:05:49.086545 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 00:05:49.089817 (kubelet)[1748]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 14 00:05:49.135648 kubelet[1748]: E0514 00:05:49.135569 1748 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 14 00:05:49.139173 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 14 00:05:49.139384 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 14 00:05:49.139731 systemd[1]: kubelet.service: Consumed 181ms CPU time, 94.1M memory peak. May 14 00:05:59.170462 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. May 14 00:05:59.173559 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 00:05:59.334151 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 00:05:59.341388 (kubelet)[1765]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 14 00:05:59.398488 kubelet[1765]: E0514 00:05:59.398408 1765 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 14 00:05:59.402443 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 14 00:05:59.402647 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 14 00:05:59.403076 systemd[1]: kubelet.service: Consumed 195ms CPU time, 95.9M memory peak. May 14 00:06:09.420374 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. May 14 00:06:09.422887 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 00:06:09.580282 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 00:06:09.590427 (kubelet)[1781]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 14 00:06:09.666902 kubelet[1781]: E0514 00:06:09.666492 1781 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 14 00:06:09.670176 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 14 00:06:09.670479 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 14 00:06:09.671108 systemd[1]: kubelet.service: Consumed 212ms CPU time, 93.8M memory peak. May 14 00:06:19.920311 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9. 
May 14 00:06:19.922660 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 00:06:20.090606 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 00:06:20.098326 (kubelet)[1797]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 14 00:06:20.159471 kubelet[1797]: E0514 00:06:20.159407 1797 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 14 00:06:20.162607 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 14 00:06:20.162847 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 14 00:06:20.163525 systemd[1]: kubelet.service: Consumed 203ms CPU time, 97.5M memory peak. May 14 00:06:30.170556 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10. May 14 00:06:30.172939 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 00:06:30.318879 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 00:06:30.330492 (kubelet)[1813]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 14 00:06:30.392167 kubelet[1813]: E0514 00:06:30.392018 1813 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 14 00:06:30.395291 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 14 00:06:30.395459 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 14 00:06:30.396037 systemd[1]: kubelet.service: Consumed 179ms CPU time, 94.3M memory peak. May 14 00:06:40.420368 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 11. May 14 00:06:40.422874 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 00:06:40.595952 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 00:06:40.605438 (kubelet)[1829]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 14 00:06:40.681345 kubelet[1829]: E0514 00:06:40.681157 1829 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 14 00:06:40.685456 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 14 00:06:40.685634 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 14 00:06:40.685970 systemd[1]: kubelet.service: Consumed 224ms CPU time, 97.8M memory peak. May 14 00:06:50.920107 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 12. May 14 00:06:50.921965 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
May 14 00:06:51.086100 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 00:06:51.091493 (kubelet)[1845]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 14 00:06:51.151261 kubelet[1845]: E0514 00:06:51.151155 1845 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 14 00:06:51.154259 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 14 00:06:51.154445 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 14 00:06:51.154792 systemd[1]: kubelet.service: Consumed 194ms CPU time, 95.8M memory peak. May 14 00:07:01.170345 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 13. May 14 00:07:01.172882 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 00:07:01.348418 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 00:07:01.363431 (kubelet)[1862]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 14 00:07:01.436070 kubelet[1862]: E0514 00:07:01.435872 1862 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 14 00:07:01.439659 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 14 00:07:01.439850 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 14 00:07:01.440237 systemd[1]: kubelet.service: Consumed 218ms CPU time, 96.2M memory peak. May 14 00:07:11.670481 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 14. May 14 00:07:11.673247 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 00:07:11.813474 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 00:07:11.823230 (kubelet)[1878]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 14 00:07:11.856004 kubelet[1878]: E0514 00:07:11.855934 1878 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 14 00:07:11.859161 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 14 00:07:11.859276 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 14 00:07:11.859556 systemd[1]: kubelet.service: Consumed 160ms CPU time, 96.3M memory peak. May 14 00:07:21.919881 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 15. May 14 00:07:21.921620 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 00:07:22.095868 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 14 00:07:22.107378 (kubelet)[1894]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 14 00:07:22.163598 kubelet[1894]: E0514 00:07:22.163503 1894 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 14 00:07:22.167530 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 14 00:07:22.167762 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 14 00:07:22.168302 systemd[1]: kubelet.service: Consumed 212ms CPU time, 95.7M memory peak. May 14 00:07:32.170297 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 16. May 14 00:07:32.172501 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 00:07:32.304892 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 00:07:32.308198 (kubelet)[1910]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 14 00:07:32.370834 kubelet[1910]: E0514 00:07:32.370679 1910 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 14 00:07:32.373367 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 14 00:07:32.373589 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 14 00:07:32.373997 systemd[1]: kubelet.service: Consumed 167ms CPU time, 97.7M memory peak. May 14 00:07:42.419907 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 17. May 14 00:07:42.421717 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 00:07:42.585926 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 00:07:42.592450 (kubelet)[1926]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 14 00:07:42.643120 kubelet[1926]: E0514 00:07:42.642959 1926 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 14 00:07:42.646862 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 14 00:07:42.646995 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 14 00:07:42.647311 systemd[1]: kubelet.service: Consumed 204ms CPU time, 97.3M memory peak. May 14 00:07:52.670275 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 18. May 14 00:07:52.672631 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 00:07:52.811846 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 14 00:07:52.820384 (kubelet)[1943]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 14 00:07:52.858426 kubelet[1943]: E0514 00:07:52.858367 1943 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 14 00:07:52.861007 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 14 00:07:52.861291 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 14 00:07:52.861550 systemd[1]: kubelet.service: Consumed 144ms CPU time, 97.8M memory peak. May 14 00:08:02.920747 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 19. May 14 00:08:02.923554 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 00:08:03.098282 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 00:08:03.106477 (kubelet)[1959]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 14 00:08:03.156013 kubelet[1959]: E0514 00:08:03.155889 1959 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 14 00:08:03.158807 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 14 00:08:03.159102 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 14 00:08:03.159554 systemd[1]: kubelet.service: Consumed 205ms CPU time, 96.1M memory peak. May 14 00:08:13.170474 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 20. May 14 00:08:13.173116 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 00:08:13.321734 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 00:08:13.333231 (kubelet)[1975]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 14 00:08:13.363423 kubelet[1975]: E0514 00:08:13.363353 1975 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 14 00:08:13.366511 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 14 00:08:13.366629 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 14 00:08:13.367195 systemd[1]: kubelet.service: Consumed 156ms CPU time, 97.7M memory peak. May 14 00:08:23.420391 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 21. May 14 00:08:23.422832 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 00:08:23.564466 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 14 00:08:23.575657 (kubelet)[1991]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 14 00:08:23.643927 kubelet[1991]: E0514 00:08:23.643841 1991 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 14 00:08:23.647773 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 14 00:08:23.647980 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 14 00:08:23.648772 systemd[1]: kubelet.service: Consumed 183ms CPU time, 97.5M memory peak. May 14 00:08:33.669886 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 22. May 14 00:08:33.671850 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 00:08:33.868285 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 00:08:33.887494 (kubelet)[2008]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 14 00:08:33.938669 kubelet[2008]: E0514 00:08:33.938541 2008 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 14 00:08:33.941994 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 14 00:08:33.942253 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 14 00:08:33.942686 systemd[1]: kubelet.service: Consumed 205ms CPU time, 97.5M memory peak. May 14 00:08:44.170506 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 23. May 14 00:08:44.172903 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 00:08:44.344155 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 00:08:44.353377 (kubelet)[2024]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 14 00:08:44.415378 kubelet[2024]: E0514 00:08:44.415216 2024 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 14 00:08:44.419275 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 14 00:08:44.419622 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 14 00:08:44.420419 systemd[1]: kubelet.service: Consumed 204ms CPU time, 96M memory peak. May 14 00:08:45.161540 systemd[1]: Started sshd@1-95.217.191.100:22-139.178.89.65:41948.service - OpenSSH per-connection server daemon (139.178.89.65:41948). 
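The kubelet restart loop above (restart counters 6 through 23, a new attempt roughly every ten seconds) fails for one reason: /var/lib/kubelet/config.yaml does not exist yet. On a kubeadm-managed node that file is only written by kubeadm init or kubeadm join, so the loop is expected until provisioning runs. A minimal way to confirm this failure mode from a shell on the node (a sketch using standard systemd and coreutils commands; the path is the one named in the error above):

  # Unit state and the most recent failure
  systemctl status kubelet --no-pager
  journalctl -u kubelet -n 20 --no-pager

  # The file the kubelet is trying to load; absent until kubeadm writes it
  ls -l /var/lib/kubelet/config.yaml || echo "config.yaml not written yet"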
May 14 00:08:46.164877 sshd[2032]: Accepted publickey for core from 139.178.89.65 port 41948 ssh2: RSA SHA256:hl+BNyx6+jF0K6RugsdzigOvrBYkZECkTt1LqyyIOlQ May 14 00:08:46.169746 sshd-session[2032]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:08:46.189801 systemd-logind[1497]: New session 1 of user core. May 14 00:08:46.191891 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 14 00:08:46.193758 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 14 00:08:46.240652 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 14 00:08:46.246614 systemd[1]: Starting user@500.service - User Manager for UID 500... May 14 00:08:46.269164 (systemd)[2036]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 14 00:08:46.274320 systemd-logind[1497]: New session c1 of user core. May 14 00:08:46.501227 systemd[2036]: Queued start job for default target default.target. May 14 00:08:46.507982 systemd[2036]: Created slice app.slice - User Application Slice. May 14 00:08:46.508011 systemd[2036]: Reached target paths.target - Paths. May 14 00:08:46.508071 systemd[2036]: Reached target timers.target - Timers. May 14 00:08:46.509349 systemd[2036]: Starting dbus.socket - D-Bus User Message Bus Socket... May 14 00:08:46.536362 systemd[2036]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 14 00:08:46.536512 systemd[2036]: Reached target sockets.target - Sockets. May 14 00:08:46.536590 systemd[2036]: Reached target basic.target - Basic System. May 14 00:08:46.536637 systemd[2036]: Reached target default.target - Main User Target. May 14 00:08:46.536675 systemd[2036]: Startup finished in 241ms. May 14 00:08:46.537287 systemd[1]: Started user@500.service - User Manager for UID 500. May 14 00:08:46.546287 systemd[1]: Started session-1.scope - Session 1 of User core. May 14 00:08:47.246282 systemd[1]: Started sshd@2-95.217.191.100:22-139.178.89.65:48868.service - OpenSSH per-connection server daemon (139.178.89.65:48868). May 14 00:08:48.255521 sshd[2047]: Accepted publickey for core from 139.178.89.65 port 48868 ssh2: RSA SHA256:hl+BNyx6+jF0K6RugsdzigOvrBYkZECkTt1LqyyIOlQ May 14 00:08:48.257488 sshd-session[2047]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:08:48.264126 systemd-logind[1497]: New session 2 of user core. May 14 00:08:48.272236 systemd[1]: Started session-2.scope - Session 2 of User core. May 14 00:08:48.936116 sshd[2049]: Connection closed by 139.178.89.65 port 48868 May 14 00:08:48.936914 sshd-session[2047]: pam_unix(sshd:session): session closed for user core May 14 00:08:48.940081 systemd[1]: sshd@2-95.217.191.100:22-139.178.89.65:48868.service: Deactivated successfully. May 14 00:08:48.941787 systemd[1]: session-2.scope: Deactivated successfully. May 14 00:08:48.943312 systemd-logind[1497]: Session 2 logged out. Waiting for processes to exit. May 14 00:08:48.944647 systemd-logind[1497]: Removed session 2. May 14 00:08:49.108660 systemd[1]: Started sshd@3-95.217.191.100:22-139.178.89.65:48884.service - OpenSSH per-connection server daemon (139.178.89.65:48884). 
May 14 00:08:50.104785 sshd[2055]: Accepted publickey for core from 139.178.89.65 port 48884 ssh2: RSA SHA256:hl+BNyx6+jF0K6RugsdzigOvrBYkZECkTt1LqyyIOlQ May 14 00:08:50.106537 sshd-session[2055]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:08:50.112737 systemd-logind[1497]: New session 3 of user core. May 14 00:08:50.123281 systemd[1]: Started session-3.scope - Session 3 of User core. May 14 00:08:50.776058 sshd[2057]: Connection closed by 139.178.89.65 port 48884 May 14 00:08:50.776702 sshd-session[2055]: pam_unix(sshd:session): session closed for user core May 14 00:08:50.780724 systemd[1]: sshd@3-95.217.191.100:22-139.178.89.65:48884.service: Deactivated successfully. May 14 00:08:50.783270 systemd[1]: session-3.scope: Deactivated successfully. May 14 00:08:50.784501 systemd-logind[1497]: Session 3 logged out. Waiting for processes to exit. May 14 00:08:50.785866 systemd-logind[1497]: Removed session 3. May 14 00:08:50.948601 systemd[1]: Started sshd@4-95.217.191.100:22-139.178.89.65:48900.service - OpenSSH per-connection server daemon (139.178.89.65:48900). May 14 00:08:51.949277 sshd[2063]: Accepted publickey for core from 139.178.89.65 port 48900 ssh2: RSA SHA256:hl+BNyx6+jF0K6RugsdzigOvrBYkZECkTt1LqyyIOlQ May 14 00:08:51.951559 sshd-session[2063]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:08:51.959528 systemd-logind[1497]: New session 4 of user core. May 14 00:08:51.968308 systemd[1]: Started session-4.scope - Session 4 of User core. May 14 00:08:52.628616 sshd[2065]: Connection closed by 139.178.89.65 port 48900 May 14 00:08:52.629675 sshd-session[2063]: pam_unix(sshd:session): session closed for user core May 14 00:08:52.634256 systemd[1]: sshd@4-95.217.191.100:22-139.178.89.65:48900.service: Deactivated successfully. May 14 00:08:52.637565 systemd[1]: session-4.scope: Deactivated successfully. May 14 00:08:52.640267 systemd-logind[1497]: Session 4 logged out. Waiting for processes to exit. May 14 00:08:52.642292 systemd-logind[1497]: Removed session 4. May 14 00:08:52.805721 systemd[1]: Started sshd@5-95.217.191.100:22-139.178.89.65:48914.service - OpenSSH per-connection server daemon (139.178.89.65:48914). May 14 00:08:53.820299 sshd[2071]: Accepted publickey for core from 139.178.89.65 port 48914 ssh2: RSA SHA256:hl+BNyx6+jF0K6RugsdzigOvrBYkZECkTt1LqyyIOlQ May 14 00:08:53.822541 sshd-session[2071]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:08:53.832012 systemd-logind[1497]: New session 5 of user core. May 14 00:08:53.838325 systemd[1]: Started session-5.scope - Session 5 of User core. May 14 00:08:54.355214 sudo[2074]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 14 00:08:54.355649 sudo[2074]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 14 00:08:54.371437 sudo[2074]: pam_unix(sudo:session): session closed for user root May 14 00:08:54.530439 sshd[2073]: Connection closed by 139.178.89.65 port 48914 May 14 00:08:54.531616 sshd-session[2071]: pam_unix(sshd:session): session closed for user core May 14 00:08:54.536545 systemd[1]: sshd@5-95.217.191.100:22-139.178.89.65:48914.service: Deactivated successfully. May 14 00:08:54.540256 systemd[1]: session-5.scope: Deactivated successfully. May 14 00:08:54.542271 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 24. May 14 00:08:54.545243 systemd-logind[1497]: Session 5 logged out. Waiting for processes to exit. 
May 14 00:08:54.547003 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 00:08:54.548559 systemd-logind[1497]: Removed session 5. May 14 00:08:54.703259 systemd[1]: Started sshd@6-95.217.191.100:22-139.178.89.65:48926.service - OpenSSH per-connection server daemon (139.178.89.65:48926). May 14 00:08:54.717828 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 00:08:54.730360 (kubelet)[2089]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 14 00:08:54.769432 kubelet[2089]: E0514 00:08:54.769367 2089 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 14 00:08:54.772245 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 14 00:08:54.772364 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 14 00:08:54.772593 systemd[1]: kubelet.service: Consumed 176ms CPU time, 95.8M memory peak. May 14 00:08:55.690663 sshd[2085]: Accepted publickey for core from 139.178.89.65 port 48926 ssh2: RSA SHA256:hl+BNyx6+jF0K6RugsdzigOvrBYkZECkTt1LqyyIOlQ May 14 00:08:55.692976 sshd-session[2085]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:08:55.699359 systemd-logind[1497]: New session 6 of user core. May 14 00:08:55.710272 systemd[1]: Started session-6.scope - Session 6 of User core. May 14 00:08:56.215876 sudo[2101]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 14 00:08:56.216377 sudo[2101]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 14 00:08:56.222087 sudo[2101]: pam_unix(sudo:session): session closed for user root May 14 00:08:56.231414 sudo[2100]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules May 14 00:08:56.231919 sudo[2100]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 14 00:08:56.247750 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 14 00:08:56.299597 augenrules[2123]: No rules May 14 00:08:56.300574 systemd[1]: audit-rules.service: Deactivated successfully. May 14 00:08:56.300880 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 14 00:08:56.302392 sudo[2100]: pam_unix(sudo:session): session closed for user root May 14 00:08:56.461477 sshd[2099]: Connection closed by 139.178.89.65 port 48926 May 14 00:08:56.462281 sshd-session[2085]: pam_unix(sshd:session): session closed for user core May 14 00:08:56.469307 systemd[1]: sshd@6-95.217.191.100:22-139.178.89.65:48926.service: Deactivated successfully. May 14 00:08:56.471987 systemd[1]: session-6.scope: Deactivated successfully. May 14 00:08:56.472776 systemd-logind[1497]: Session 6 logged out. Waiting for processes to exit. May 14 00:08:56.473748 systemd-logind[1497]: Removed session 6. May 14 00:08:56.633523 systemd[1]: Started sshd@7-95.217.191.100:22-139.178.89.65:48930.service - OpenSSH per-connection server daemon (139.178.89.65:48930). 
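Each start attempt also logs "Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS": the unit's ExecStart expands variables that nothing has defined yet. On kubeadm-style installs these are normally supplied through a systemd drop-in plus environment files written at join time; the sketch below shows the general shape only (file names and contents are illustrative assumptions, not taken from this node):

  # Illustrative drop-in, e.g. /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
  cat <<'EOF'
  [Service]
  Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
  # Written by kubeadm at init/join time; defines KUBELET_KUBEADM_ARGS
  EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
  # Optional operator overrides; defines KUBELET_EXTRA_ARGS
  EnvironmentFile=-/etc/default/kubelet
  ExecStart=
  ExecStart=/usr/bin/kubelet $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS
  EOF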
May 14 00:08:57.640659 sshd[2132]: Accepted publickey for core from 139.178.89.65 port 48930 ssh2: RSA SHA256:hl+BNyx6+jF0K6RugsdzigOvrBYkZECkTt1LqyyIOlQ May 14 00:08:57.642649 sshd-session[2132]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:08:57.652608 systemd-logind[1497]: New session 7 of user core. May 14 00:08:57.670911 systemd[1]: Started session-7.scope - Session 7 of User core. May 14 00:08:58.164156 sudo[2135]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 14 00:08:58.164596 sudo[2135]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 14 00:08:58.741475 systemd[1]: Starting docker.service - Docker Application Container Engine... May 14 00:08:58.757603 (dockerd)[2152]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 14 00:08:59.157822 dockerd[2152]: time="2025-05-14T00:08:59.157635814Z" level=info msg="Starting up" May 14 00:08:59.159488 dockerd[2152]: time="2025-05-14T00:08:59.159442742Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" May 14 00:08:59.241137 dockerd[2152]: time="2025-05-14T00:08:59.240823681Z" level=info msg="Loading containers: start." May 14 00:08:59.457089 kernel: Initializing XFRM netlink socket May 14 00:08:59.558872 systemd-networkd[1405]: docker0: Link UP May 14 00:08:59.627144 dockerd[2152]: time="2025-05-14T00:08:59.627066059Z" level=info msg="Loading containers: done." May 14 00:08:59.649220 dockerd[2152]: time="2025-05-14T00:08:59.649128357Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 14 00:08:59.649459 dockerd[2152]: time="2025-05-14T00:08:59.649253016Z" level=info msg="Docker daemon" commit=c710b88579fcb5e0d53f96dcae976d79323b9166 containerd-snapshotter=false storage-driver=overlay2 version=27.4.1 May 14 00:08:59.649459 dockerd[2152]: time="2025-05-14T00:08:59.649388757Z" level=info msg="Daemon has completed initialization" May 14 00:08:59.701403 systemd[1]: Started docker.service - Docker Application Container Engine. May 14 00:08:59.701747 dockerd[2152]: time="2025-05-14T00:08:59.701591464Z" level=info msg="API listen on /run/docker.sock" May 14 00:09:01.146670 containerd[1513]: time="2025-05-14T00:09:01.146612342Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\"" May 14 00:09:01.805408 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2011374805.mount: Deactivated successfully. 
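By this point dockerd is up: the daemon reports the overlay2 storage driver, systemd-networkd has brought the docker0 bridge link up, and the API is listening on /run/docker.sock. A quick sanity check with the stock Docker CLI (nothing here is specific to this host):

  # Daemon answers on the default socket; storage driver should report overlay2
  docker version --format '{{.Server.Version}}'
  docker info --format '{{.Driver}}'
  ip link show docker0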
May 14 00:09:03.629682 containerd[1513]: time="2025-05-14T00:09:03.629613870Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:09:03.631170 containerd[1513]: time="2025-05-14T00:09:03.631127105Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.12: active requests=0, bytes read=32674967" May 14 00:09:03.632447 containerd[1513]: time="2025-05-14T00:09:03.632413199Z" level=info msg="ImageCreate event name:\"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:09:03.634547 containerd[1513]: time="2025-05-14T00:09:03.634515569Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:09:03.635366 containerd[1513]: time="2025-05-14T00:09:03.635218409Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.12\" with image id \"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\", size \"32671673\" in 2.488554174s" May 14 00:09:03.635366 containerd[1513]: time="2025-05-14T00:09:03.635245358Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\" returns image reference \"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\"" May 14 00:09:03.648310 containerd[1513]: time="2025-05-14T00:09:03.648278185Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\"" May 14 00:09:04.920232 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 25. May 14 00:09:04.922845 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 00:09:05.079320 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 00:09:05.081955 (kubelet)[2427]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 14 00:09:05.114575 kubelet[2427]: E0514 00:09:05.114460 2427 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 14 00:09:05.117982 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 14 00:09:05.118154 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 14 00:09:05.118524 systemd[1]: kubelet.service: Consumed 163ms CPU time, 95.1M memory peak. 
May 14 00:09:05.932095 containerd[1513]: time="2025-05-14T00:09:05.931999948Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:09:05.933245 containerd[1513]: time="2025-05-14T00:09:05.933190965Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.12: active requests=0, bytes read=29617556" May 14 00:09:05.935656 containerd[1513]: time="2025-05-14T00:09:05.935583106Z" level=info msg="ImageCreate event name:\"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:09:05.939671 containerd[1513]: time="2025-05-14T00:09:05.939632177Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:09:05.941126 containerd[1513]: time="2025-05-14T00:09:05.940922062Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.12\" with image id \"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\", size \"31105907\" in 2.292607552s" May 14 00:09:05.941126 containerd[1513]: time="2025-05-14T00:09:05.940978404Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\" returns image reference \"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\"" May 14 00:09:05.961124 containerd[1513]: time="2025-05-14T00:09:05.961080511Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\"" May 14 00:09:07.786650 containerd[1513]: time="2025-05-14T00:09:07.786591112Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:09:07.787741 containerd[1513]: time="2025-05-14T00:09:07.787697899Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.12: active requests=0, bytes read=17903704" May 14 00:09:07.791214 containerd[1513]: time="2025-05-14T00:09:07.790401914Z" level=info msg="ImageCreate event name:\"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:09:07.793456 containerd[1513]: time="2025-05-14T00:09:07.793428213Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:09:07.793891 containerd[1513]: time="2025-05-14T00:09:07.793873851Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.12\" with image id \"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\", size \"19392073\" in 1.832761872s" May 14 00:09:07.794006 containerd[1513]: time="2025-05-14T00:09:07.793937175Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\" returns image reference \"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\"" May 14 00:09:07.807974 
containerd[1513]: time="2025-05-14T00:09:07.807869263Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\"" May 14 00:09:08.960505 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount529693851.mount: Deactivated successfully. May 14 00:09:09.381525 containerd[1513]: time="2025-05-14T00:09:09.381363741Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:09:09.382603 containerd[1513]: time="2025-05-14T00:09:09.382546300Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.12: active requests=0, bytes read=29185845" May 14 00:09:09.383640 containerd[1513]: time="2025-05-14T00:09:09.383597278Z" level=info msg="ImageCreate event name:\"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:09:09.385500 containerd[1513]: time="2025-05-14T00:09:09.385450625Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:09:09.386055 containerd[1513]: time="2025-05-14T00:09:09.385833290Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.12\" with image id \"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\", repo tag \"registry.k8s.io/kube-proxy:v1.30.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\", size \"29184836\" in 1.577868644s" May 14 00:09:09.386055 containerd[1513]: time="2025-05-14T00:09:09.385877962Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\" returns image reference \"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\"" May 14 00:09:09.401110 containerd[1513]: time="2025-05-14T00:09:09.401059390Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" May 14 00:09:09.997080 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount990113139.mount: Deactivated successfully. 
May 14 00:09:10.926143 containerd[1513]: time="2025-05-14T00:09:10.926042502Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:09:10.928333 containerd[1513]: time="2025-05-14T00:09:10.928229568Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185843" May 14 00:09:10.929329 containerd[1513]: time="2025-05-14T00:09:10.929274758Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:09:10.932552 containerd[1513]: time="2025-05-14T00:09:10.932453397Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:09:10.934038 containerd[1513]: time="2025-05-14T00:09:10.933889417Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.53206095s" May 14 00:09:10.934038 containerd[1513]: time="2025-05-14T00:09:10.933932366Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" May 14 00:09:10.961685 containerd[1513]: time="2025-05-14T00:09:10.961629620Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" May 14 00:09:11.488059 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2787412828.mount: Deactivated successfully. 
May 14 00:09:11.498461 containerd[1513]: time="2025-05-14T00:09:11.498368949Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:09:11.499715 containerd[1513]: time="2025-05-14T00:09:11.499589619Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322312" May 14 00:09:11.501269 containerd[1513]: time="2025-05-14T00:09:11.501174383Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:09:11.503166 containerd[1513]: time="2025-05-14T00:09:11.503099115Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:09:11.504428 containerd[1513]: time="2025-05-14T00:09:11.504275305Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 542.599992ms" May 14 00:09:11.504428 containerd[1513]: time="2025-05-14T00:09:11.504334312Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" May 14 00:09:11.533969 containerd[1513]: time="2025-05-14T00:09:11.533896168Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" May 14 00:09:12.116375 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2261792829.mount: Deactivated successfully. May 14 00:09:15.170452 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 26. May 14 00:09:15.173522 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 00:09:15.333556 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 00:09:15.346379 (kubelet)[2583]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 14 00:09:15.440892 kubelet[2583]: E0514 00:09:15.440734 2583 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 14 00:09:15.446004 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 14 00:09:15.446372 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 14 00:09:15.446919 systemd[1]: kubelet.service: Consumed 208ms CPU time, 96.6M memory peak. 
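The pulls recorded above fetch the usual control-plane set for this release line: kube-apiserver, kube-controller-manager, kube-scheduler and kube-proxy at v1.30.12, coredns v1.11.1, pause 3.9, and (still in progress) etcd 3.5.12-0. Because containerd is the CRI runtime here, the images land in its k8s.io namespace and can be listed or pre-pulled ahead of time; a sketch with standard containerd and kubeadm tooling:

  # What containerd already holds for Kubernetes
  ctr -n k8s.io images ls -q | grep registry.k8s.io

  # Resolve and pre-pull the full set for the target version
  kubeadm config images list --kubernetes-version v1.30.12
  kubeadm config images pull --kubernetes-version v1.30.12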
May 14 00:09:16.686713 containerd[1513]: time="2025-05-14T00:09:16.686654778Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:09:16.688098 containerd[1513]: time="2025-05-14T00:09:16.688042834Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238653" May 14 00:09:16.688722 containerd[1513]: time="2025-05-14T00:09:16.688366514Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:09:16.690880 containerd[1513]: time="2025-05-14T00:09:16.690831365Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:09:16.692164 containerd[1513]: time="2025-05-14T00:09:16.691987887Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 5.158033814s" May 14 00:09:16.692164 containerd[1513]: time="2025-05-14T00:09:16.692051574Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" May 14 00:09:20.172833 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 14 00:09:20.173820 systemd[1]: kubelet.service: Consumed 208ms CPU time, 96.6M memory peak. May 14 00:09:20.177644 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 00:09:20.208358 systemd[1]: Reload requested from client PID 2683 ('systemctl') (unit session-7.scope)... May 14 00:09:20.208383 systemd[1]: Reloading... May 14 00:09:20.318046 zram_generator::config[2728]: No configuration found. May 14 00:09:20.423481 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 14 00:09:20.526843 systemd[1]: Reloading finished in 317 ms. May 14 00:09:20.571199 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM May 14 00:09:20.571280 systemd[1]: kubelet.service: Failed with result 'signal'. May 14 00:09:20.571477 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 14 00:09:20.571512 systemd[1]: kubelet.service: Consumed 81ms CPU time, 83M memory peak. May 14 00:09:20.572941 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 00:09:20.716617 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 00:09:20.730081 (kubelet)[2782]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 14 00:09:20.804954 kubelet[2782]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 14 00:09:20.805706 kubelet[2782]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. 
Image garbage collector will get sandbox image information from CRI. May 14 00:09:20.805706 kubelet[2782]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 14 00:09:20.809752 kubelet[2782]: I0514 00:09:20.809692 2782 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 14 00:09:21.124884 kubelet[2782]: I0514 00:09:21.124734 2782 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" May 14 00:09:21.124884 kubelet[2782]: I0514 00:09:21.124775 2782 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 14 00:09:21.125243 kubelet[2782]: I0514 00:09:21.125162 2782 server.go:927] "Client rotation is on, will bootstrap in background" May 14 00:09:21.160074 kubelet[2782]: I0514 00:09:21.159836 2782 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 14 00:09:21.163281 kubelet[2782]: E0514 00:09:21.162799 2782 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://95.217.191.100:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 95.217.191.100:6443: connect: connection refused May 14 00:09:21.182467 kubelet[2782]: I0514 00:09:21.182427 2782 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 14 00:09:21.186643 kubelet[2782]: I0514 00:09:21.186567 2782 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 14 00:09:21.186891 kubelet[2782]: I0514 00:09:21.186615 2782 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4284-0-0-n-fdde459219","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} May 14 00:09:21.187674 
kubelet[2782]: I0514 00:09:21.187621 2782 topology_manager.go:138] "Creating topology manager with none policy" May 14 00:09:21.187674 kubelet[2782]: I0514 00:09:21.187653 2782 container_manager_linux.go:301] "Creating device plugin manager" May 14 00:09:21.189226 kubelet[2782]: I0514 00:09:21.189170 2782 state_mem.go:36] "Initialized new in-memory state store" May 14 00:09:21.190363 kubelet[2782]: I0514 00:09:21.190221 2782 kubelet.go:400] "Attempting to sync node with API server" May 14 00:09:21.190363 kubelet[2782]: I0514 00:09:21.190252 2782 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" May 14 00:09:21.190363 kubelet[2782]: I0514 00:09:21.190284 2782 kubelet.go:312] "Adding apiserver pod source" May 14 00:09:21.190363 kubelet[2782]: I0514 00:09:21.190326 2782 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 14 00:09:21.196007 kubelet[2782]: W0514 00:09:21.195147 2782 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://95.217.191.100:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4284-0-0-n-fdde459219&limit=500&resourceVersion=0": dial tcp 95.217.191.100:6443: connect: connection refused May 14 00:09:21.196007 kubelet[2782]: E0514 00:09:21.195236 2782 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://95.217.191.100:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4284-0-0-n-fdde459219&limit=500&resourceVersion=0": dial tcp 95.217.191.100:6443: connect: connection refused May 14 00:09:21.196007 kubelet[2782]: W0514 00:09:21.195660 2782 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://95.217.191.100:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 95.217.191.100:6443: connect: connection refused May 14 00:09:21.196007 kubelet[2782]: E0514 00:09:21.195713 2782 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://95.217.191.100:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 95.217.191.100:6443: connect: connection refused May 14 00:09:21.196431 kubelet[2782]: I0514 00:09:21.196389 2782 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v2.0.1" apiVersion="v1" May 14 00:09:21.199898 kubelet[2782]: I0514 00:09:21.199876 2782 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 14 00:09:21.200073 kubelet[2782]: W0514 00:09:21.200059 2782 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
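The deprecation warnings at startup point at the same remedy: --container-runtime-endpoint and --volume-plugin-dir should be set in the kubelet config file, while --pod-infra-container-image is going away entirely because the sandbox image now comes from the CRI. A hedged sketch of the corresponding fields in /var/lib/kubelet/config.yaml (the socket path is an assumption for a containerd runtime; the plugin directory is the one probed above):

  cat <<'EOF'
  apiVersion: kubelet.config.k8s.io/v1beta1
  kind: KubeletConfiguration
  # Assumed default containerd socket; adjust to the node's actual endpoint
  containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
  volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/
  EOF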
May 14 00:09:21.200907 kubelet[2782]: I0514 00:09:21.200888 2782 server.go:1264] "Started kubelet" May 14 00:09:21.208171 kubelet[2782]: I0514 00:09:21.207648 2782 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 14 00:09:21.209811 kubelet[2782]: I0514 00:09:21.208976 2782 server.go:455] "Adding debug handlers to kubelet server" May 14 00:09:21.210245 kubelet[2782]: I0514 00:09:21.210162 2782 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 14 00:09:21.210639 kubelet[2782]: I0514 00:09:21.210618 2782 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 14 00:09:21.211582 kubelet[2782]: E0514 00:09:21.211399 2782 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://95.217.191.100:6443/api/v1/namespaces/default/events\": dial tcp 95.217.191.100:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4284-0-0-n-fdde459219.183f3c30e88f8f31 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4284-0-0-n-fdde459219,UID:ci-4284-0-0-n-fdde459219,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4284-0-0-n-fdde459219,},FirstTimestamp:2025-05-14 00:09:21.200860977 +0000 UTC m=+0.463736986,LastTimestamp:2025-05-14 00:09:21.200860977 +0000 UTC m=+0.463736986,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4284-0-0-n-fdde459219,}" May 14 00:09:21.214609 kubelet[2782]: I0514 00:09:21.212428 2782 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 14 00:09:21.216185 kubelet[2782]: I0514 00:09:21.216077 2782 volume_manager.go:291] "Starting Kubelet Volume Manager" May 14 00:09:21.217640 kubelet[2782]: I0514 00:09:21.216767 2782 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 14 00:09:21.217640 kubelet[2782]: I0514 00:09:21.216843 2782 reconciler.go:26] "Reconciler: start to sync state" May 14 00:09:21.221374 kubelet[2782]: W0514 00:09:21.221305 2782 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://95.217.191.100:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 95.217.191.100:6443: connect: connection refused May 14 00:09:21.221457 kubelet[2782]: E0514 00:09:21.221384 2782 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://95.217.191.100:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 95.217.191.100:6443: connect: connection refused May 14 00:09:21.226906 kubelet[2782]: E0514 00:09:21.226853 2782 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://95.217.191.100:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4284-0-0-n-fdde459219?timeout=10s\": dial tcp 95.217.191.100:6443: connect: connection refused" interval="200ms" May 14 00:09:21.234148 kubelet[2782]: I0514 00:09:21.234118 2782 factory.go:221] Registration of the containerd container factory successfully May 14 00:09:21.234148 kubelet[2782]: I0514 00:09:21.234142 2782 factory.go:221] Registration of the systemd container factory successfully May 14 00:09:21.234285 kubelet[2782]: I0514 00:09:21.234240 2782 factory.go:219] Registration of the 
crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 14 00:09:21.256846 kubelet[2782]: I0514 00:09:21.256816 2782 cpu_manager.go:214] "Starting CPU manager" policy="none" May 14 00:09:21.256846 kubelet[2782]: I0514 00:09:21.256838 2782 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 14 00:09:21.256846 kubelet[2782]: I0514 00:09:21.256855 2782 state_mem.go:36] "Initialized new in-memory state store" May 14 00:09:21.258359 kubelet[2782]: I0514 00:09:21.258328 2782 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 14 00:09:21.259745 kubelet[2782]: I0514 00:09:21.259440 2782 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 14 00:09:21.259745 kubelet[2782]: I0514 00:09:21.259463 2782 status_manager.go:217] "Starting to sync pod status with apiserver" May 14 00:09:21.259745 kubelet[2782]: I0514 00:09:21.259480 2782 kubelet.go:2337] "Starting kubelet main sync loop" May 14 00:09:21.259745 kubelet[2782]: E0514 00:09:21.259513 2782 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 14 00:09:21.260240 kubelet[2782]: I0514 00:09:21.260228 2782 policy_none.go:49] "None policy: Start" May 14 00:09:21.262001 kubelet[2782]: W0514 00:09:21.261980 2782 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://95.217.191.100:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 95.217.191.100:6443: connect: connection refused May 14 00:09:21.262119 kubelet[2782]: E0514 00:09:21.262106 2782 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://95.217.191.100:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 95.217.191.100:6443: connect: connection refused May 14 00:09:21.263131 kubelet[2782]: I0514 00:09:21.263102 2782 memory_manager.go:170] "Starting memorymanager" policy="None" May 14 00:09:21.263131 kubelet[2782]: I0514 00:09:21.263126 2782 state_mem.go:35] "Initializing new in-memory state store" May 14 00:09:21.268581 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 14 00:09:21.287466 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. May 14 00:09:21.299255 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
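Every client-go reflector, the certificate manager, the event post and the lease controller above fail the same way: dial tcp 95.217.191.100:6443: connect: connection refused. That is expected on a control-plane node at this stage, since the kubelet comes up before the kube-apiserver it will itself launch as a static pod; the crio factory registration failure is likewise harmless, as the runtime here is containerd, not CRI-O. A hedged way to watch the bootstrap converge from the node:

  # Static pod manifests the kubelet can start without an apiserver
  ls /etc/kubernetes/manifests/

  # Watch the apiserver container appear via the CRI
  crictl ps -a | grep kube-apiserver

  # Once it answers, the connection-refused errors above stop
  curl -k https://95.217.191.100:6443/healthz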
May 14 00:09:21.301040 kubelet[2782]: I0514 00:09:21.300399 2782 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 14 00:09:21.301040 kubelet[2782]: I0514 00:09:21.300557 2782 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 14 00:09:21.301040 kubelet[2782]: I0514 00:09:21.300639 2782 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 14 00:09:21.302294 kubelet[2782]: E0514 00:09:21.302285 2782 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4284-0-0-n-fdde459219\" not found" May 14 00:09:21.318376 kubelet[2782]: I0514 00:09:21.318337 2782 kubelet_node_status.go:73] "Attempting to register node" node="ci-4284-0-0-n-fdde459219" May 14 00:09:21.318773 kubelet[2782]: E0514 00:09:21.318740 2782 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://95.217.191.100:6443/api/v1/nodes\": dial tcp 95.217.191.100:6443: connect: connection refused" node="ci-4284-0-0-n-fdde459219" May 14 00:09:21.361259 kubelet[2782]: I0514 00:09:21.361117 2782 topology_manager.go:215] "Topology Admit Handler" podUID="3f67c79af95b0bf9aa451f8b5289ba83" podNamespace="kube-system" podName="kube-apiserver-ci-4284-0-0-n-fdde459219" May 14 00:09:21.364742 kubelet[2782]: I0514 00:09:21.364447 2782 topology_manager.go:215] "Topology Admit Handler" podUID="d876bbda8581654a37f2742b0c72b06a" podNamespace="kube-system" podName="kube-controller-manager-ci-4284-0-0-n-fdde459219" May 14 00:09:21.369354 kubelet[2782]: I0514 00:09:21.369306 2782 topology_manager.go:215] "Topology Admit Handler" podUID="518627cc0b31f0b02b35cf66120496ea" podNamespace="kube-system" podName="kube-scheduler-ci-4284-0-0-n-fdde459219" May 14 00:09:21.381690 systemd[1]: Created slice kubepods-burstable-pod3f67c79af95b0bf9aa451f8b5289ba83.slice - libcontainer container kubepods-burstable-pod3f67c79af95b0bf9aa451f8b5289ba83.slice. May 14 00:09:21.405667 systemd[1]: Created slice kubepods-burstable-podd876bbda8581654a37f2742b0c72b06a.slice - libcontainer container kubepods-burstable-podd876bbda8581654a37f2742b0c72b06a.slice. May 14 00:09:21.418786 systemd[1]: Created slice kubepods-burstable-pod518627cc0b31f0b02b35cf66120496ea.slice - libcontainer container kubepods-burstable-pod518627cc0b31f0b02b35cf66120496ea.slice. 
May 14 00:09:21.427701 kubelet[2782]: E0514 00:09:21.427623 2782 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://95.217.191.100:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4284-0-0-n-fdde459219?timeout=10s\": dial tcp 95.217.191.100:6443: connect: connection refused" interval="400ms" May 14 00:09:21.518333 kubelet[2782]: I0514 00:09:21.518183 2782 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d876bbda8581654a37f2742b0c72b06a-flexvolume-dir\") pod \"kube-controller-manager-ci-4284-0-0-n-fdde459219\" (UID: \"d876bbda8581654a37f2742b0c72b06a\") " pod="kube-system/kube-controller-manager-ci-4284-0-0-n-fdde459219" May 14 00:09:21.518333 kubelet[2782]: I0514 00:09:21.518270 2782 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d876bbda8581654a37f2742b0c72b06a-k8s-certs\") pod \"kube-controller-manager-ci-4284-0-0-n-fdde459219\" (UID: \"d876bbda8581654a37f2742b0c72b06a\") " pod="kube-system/kube-controller-manager-ci-4284-0-0-n-fdde459219" May 14 00:09:21.518333 kubelet[2782]: I0514 00:09:21.518309 2782 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3f67c79af95b0bf9aa451f8b5289ba83-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4284-0-0-n-fdde459219\" (UID: \"3f67c79af95b0bf9aa451f8b5289ba83\") " pod="kube-system/kube-apiserver-ci-4284-0-0-n-fdde459219" May 14 00:09:21.518333 kubelet[2782]: I0514 00:09:21.518339 2782 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d876bbda8581654a37f2742b0c72b06a-ca-certs\") pod \"kube-controller-manager-ci-4284-0-0-n-fdde459219\" (UID: \"d876bbda8581654a37f2742b0c72b06a\") " pod="kube-system/kube-controller-manager-ci-4284-0-0-n-fdde459219" May 14 00:09:21.518872 kubelet[2782]: I0514 00:09:21.518371 2782 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d876bbda8581654a37f2742b0c72b06a-kubeconfig\") pod \"kube-controller-manager-ci-4284-0-0-n-fdde459219\" (UID: \"d876bbda8581654a37f2742b0c72b06a\") " pod="kube-system/kube-controller-manager-ci-4284-0-0-n-fdde459219" May 14 00:09:21.518872 kubelet[2782]: I0514 00:09:21.518399 2782 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d876bbda8581654a37f2742b0c72b06a-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4284-0-0-n-fdde459219\" (UID: \"d876bbda8581654a37f2742b0c72b06a\") " pod="kube-system/kube-controller-manager-ci-4284-0-0-n-fdde459219" May 14 00:09:21.518872 kubelet[2782]: I0514 00:09:21.518426 2782 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/518627cc0b31f0b02b35cf66120496ea-kubeconfig\") pod \"kube-scheduler-ci-4284-0-0-n-fdde459219\" (UID: \"518627cc0b31f0b02b35cf66120496ea\") " pod="kube-system/kube-scheduler-ci-4284-0-0-n-fdde459219" May 14 00:09:21.518872 kubelet[2782]: I0514 00:09:21.518471 2782 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3f67c79af95b0bf9aa451f8b5289ba83-ca-certs\") pod \"kube-apiserver-ci-4284-0-0-n-fdde459219\" (UID: \"3f67c79af95b0bf9aa451f8b5289ba83\") " pod="kube-system/kube-apiserver-ci-4284-0-0-n-fdde459219" May 14 00:09:21.518872 kubelet[2782]: I0514 00:09:21.518496 2782 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3f67c79af95b0bf9aa451f8b5289ba83-k8s-certs\") pod \"kube-apiserver-ci-4284-0-0-n-fdde459219\" (UID: \"3f67c79af95b0bf9aa451f8b5289ba83\") " pod="kube-system/kube-apiserver-ci-4284-0-0-n-fdde459219" May 14 00:09:21.521711 kubelet[2782]: I0514 00:09:21.521686 2782 kubelet_node_status.go:73] "Attempting to register node" node="ci-4284-0-0-n-fdde459219" May 14 00:09:21.522270 kubelet[2782]: E0514 00:09:21.522206 2782 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://95.217.191.100:6443/api/v1/nodes\": dial tcp 95.217.191.100:6443: connect: connection refused" node="ci-4284-0-0-n-fdde459219" May 14 00:09:21.701788 containerd[1513]: time="2025-05-14T00:09:21.701718762Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4284-0-0-n-fdde459219,Uid:3f67c79af95b0bf9aa451f8b5289ba83,Namespace:kube-system,Attempt:0,}" May 14 00:09:21.713810 containerd[1513]: time="2025-05-14T00:09:21.713474915Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4284-0-0-n-fdde459219,Uid:d876bbda8581654a37f2742b0c72b06a,Namespace:kube-system,Attempt:0,}" May 14 00:09:21.724005 containerd[1513]: time="2025-05-14T00:09:21.723756495Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4284-0-0-n-fdde459219,Uid:518627cc0b31f0b02b35cf66120496ea,Namespace:kube-system,Attempt:0,}" May 14 00:09:21.828527 kubelet[2782]: E0514 00:09:21.828441 2782 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://95.217.191.100:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4284-0-0-n-fdde459219?timeout=10s\": dial tcp 95.217.191.100:6443: connect: connection refused" interval="800ms" May 14 00:09:21.925745 kubelet[2782]: I0514 00:09:21.925698 2782 kubelet_node_status.go:73] "Attempting to register node" node="ci-4284-0-0-n-fdde459219" May 14 00:09:21.926199 kubelet[2782]: E0514 00:09:21.926140 2782 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://95.217.191.100:6443/api/v1/nodes\": dial tcp 95.217.191.100:6443: connect: connection refused" node="ci-4284-0-0-n-fdde459219" May 14 00:09:22.108493 kubelet[2782]: W0514 00:09:22.108288 2782 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://95.217.191.100:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 95.217.191.100:6443: connect: connection refused May 14 00:09:22.108493 kubelet[2782]: E0514 00:09:22.108403 2782 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://95.217.191.100:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 95.217.191.100:6443: connect: connection refused May 14 00:09:22.200922 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4174393898.mount: Deactivated successfully. 
May 14 00:09:22.212688 containerd[1513]: time="2025-05-14T00:09:22.212584258Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 14 00:09:22.214916 containerd[1513]: time="2025-05-14T00:09:22.214799460Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321160" May 14 00:09:22.218070 containerd[1513]: time="2025-05-14T00:09:22.216617082Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 14 00:09:22.218070 containerd[1513]: time="2025-05-14T00:09:22.217724583Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 14 00:09:22.219804 containerd[1513]: time="2025-05-14T00:09:22.219704855Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" May 14 00:09:22.221302 containerd[1513]: time="2025-05-14T00:09:22.221215353Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" May 14 00:09:22.221462 containerd[1513]: time="2025-05-14T00:09:22.221420048Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 14 00:09:22.225832 containerd[1513]: time="2025-05-14T00:09:22.225739840Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 14 00:09:22.228609 containerd[1513]: time="2025-05-14T00:09:22.228531118Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 497.446885ms" May 14 00:09:22.230819 containerd[1513]: time="2025-05-14T00:09:22.230688564Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 513.023461ms" May 14 00:09:22.232115 containerd[1513]: time="2025-05-14T00:09:22.231701882Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 524.900895ms" May 14 00:09:22.366550 containerd[1513]: time="2025-05-14T00:09:22.364063062Z" level=info msg="connecting to shim f9a0e9ba6a3f2d476af4922b8c8bf9404339b2c43df6cc4432a4e8f9850bb8ad" address="unix:///run/containerd/s/fe3a45e467981e6bad1cffcbe01792ed01d309b51cdb1251b9a281f9e497b85d" namespace=k8s.io protocol=ttrpc version=3 May 
14 00:09:22.375808 containerd[1513]: time="2025-05-14T00:09:22.375735885Z" level=info msg="connecting to shim 2229ce920f9ccf1a3c1c58a4eda16de4dcc6318b786757697e238d423e54a289" address="unix:///run/containerd/s/37e02c690ecdbea2650114137114b9555805c85f740364494a3e43cc392ba331" namespace=k8s.io protocol=ttrpc version=3 May 14 00:09:22.383769 containerd[1513]: time="2025-05-14T00:09:22.383557898Z" level=info msg="connecting to shim ad9800ca077204fb24b5ed3c33039e327bc41c575ec0a36d27830fce576bd2a4" address="unix:///run/containerd/s/02b38001ffbd3c83a209a5776b1fb3f93c1468436286f8f34615b0c9e45358d2" namespace=k8s.io protocol=ttrpc version=3 May 14 00:09:22.473240 systemd[1]: Started cri-containerd-f9a0e9ba6a3f2d476af4922b8c8bf9404339b2c43df6cc4432a4e8f9850bb8ad.scope - libcontainer container f9a0e9ba6a3f2d476af4922b8c8bf9404339b2c43df6cc4432a4e8f9850bb8ad. May 14 00:09:22.477929 kubelet[2782]: W0514 00:09:22.477880 2782 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://95.217.191.100:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 95.217.191.100:6443: connect: connection refused May 14 00:09:22.477929 kubelet[2782]: E0514 00:09:22.477921 2782 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://95.217.191.100:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 95.217.191.100:6443: connect: connection refused May 14 00:09:22.479374 systemd[1]: Started cri-containerd-2229ce920f9ccf1a3c1c58a4eda16de4dcc6318b786757697e238d423e54a289.scope - libcontainer container 2229ce920f9ccf1a3c1c58a4eda16de4dcc6318b786757697e238d423e54a289. May 14 00:09:22.482515 systemd[1]: Started cri-containerd-ad9800ca077204fb24b5ed3c33039e327bc41c575ec0a36d27830fce576bd2a4.scope - libcontainer container ad9800ca077204fb24b5ed3c33039e327bc41c575ec0a36d27830fce576bd2a4. 
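Each "connecting to shim … address=unix:///run/containerd/s/…" line is the CRI plugin talking to a per-sandbox shim over ttrpc, and the sandboxes and containers it creates live in containerd's k8s.io namespace. A minimal sketch that inspects that namespace with the containerd Go client; the import path shown is the 1.x client layout, while the containerd 2.0.1 running on this node moved the client under a /v2 module path, so adjust accordingly:

    package main

    import (
        "context"
        "fmt"
        "log"

        "github.com/containerd/containerd"            // 1.x client; adjust for the 2.x module layout
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        // Default containerd socket; the per-sandbox shim sockets under
        // /run/containerd/s/ in the log are internal and not dialed directly.
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
        containers, err := client.Containers(ctx)
        if err != nil {
            log.Fatal(err)
        }
        for _, c := range containers {
            fmt.Println(c.ID()) // e.g. the f9a0e9ba…, 2229ce92…, ad9800ca… sandbox IDs above
        }
    }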
May 14 00:09:22.543871 containerd[1513]: time="2025-05-14T00:09:22.543586558Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4284-0-0-n-fdde459219,Uid:3f67c79af95b0bf9aa451f8b5289ba83,Namespace:kube-system,Attempt:0,} returns sandbox id \"f9a0e9ba6a3f2d476af4922b8c8bf9404339b2c43df6cc4432a4e8f9850bb8ad\"" May 14 00:09:22.551976 containerd[1513]: time="2025-05-14T00:09:22.551006043Z" level=info msg="CreateContainer within sandbox \"f9a0e9ba6a3f2d476af4922b8c8bf9404339b2c43df6cc4432a4e8f9850bb8ad\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 14 00:09:22.566459 kubelet[2782]: W0514 00:09:22.566085 2782 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://95.217.191.100:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4284-0-0-n-fdde459219&limit=500&resourceVersion=0": dial tcp 95.217.191.100:6443: connect: connection refused May 14 00:09:22.566459 kubelet[2782]: E0514 00:09:22.566249 2782 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://95.217.191.100:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4284-0-0-n-fdde459219&limit=500&resourceVersion=0": dial tcp 95.217.191.100:6443: connect: connection refused May 14 00:09:22.572519 containerd[1513]: time="2025-05-14T00:09:22.572454050Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4284-0-0-n-fdde459219,Uid:d876bbda8581654a37f2742b0c72b06a,Namespace:kube-system,Attempt:0,} returns sandbox id \"2229ce920f9ccf1a3c1c58a4eda16de4dcc6318b786757697e238d423e54a289\"" May 14 00:09:22.576192 containerd[1513]: time="2025-05-14T00:09:22.576132997Z" level=info msg="CreateContainer within sandbox \"2229ce920f9ccf1a3c1c58a4eda16de4dcc6318b786757697e238d423e54a289\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 14 00:09:22.579633 containerd[1513]: time="2025-05-14T00:09:22.579605233Z" level=info msg="Container 1e597936f54823210e782f2bfc97eb352d930c6342ac18fafd5651c79006876e: CDI devices from CRI Config.CDIDevices: []" May 14 00:09:22.585990 containerd[1513]: time="2025-05-14T00:09:22.585932255Z" level=info msg="Container 6c0a3053192afb1985fbf29b6abb0e2d848f7309453b8106c215564dac6816a7: CDI devices from CRI Config.CDIDevices: []" May 14 00:09:22.586868 containerd[1513]: time="2025-05-14T00:09:22.586840300Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4284-0-0-n-fdde459219,Uid:518627cc0b31f0b02b35cf66120496ea,Namespace:kube-system,Attempt:0,} returns sandbox id \"ad9800ca077204fb24b5ed3c33039e327bc41c575ec0a36d27830fce576bd2a4\"" May 14 00:09:22.589579 containerd[1513]: time="2025-05-14T00:09:22.589518520Z" level=info msg="CreateContainer within sandbox \"ad9800ca077204fb24b5ed3c33039e327bc41c575ec0a36d27830fce576bd2a4\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 14 00:09:22.594566 containerd[1513]: time="2025-05-14T00:09:22.594223879Z" level=info msg="CreateContainer within sandbox \"f9a0e9ba6a3f2d476af4922b8c8bf9404339b2c43df6cc4432a4e8f9850bb8ad\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"1e597936f54823210e782f2bfc97eb352d930c6342ac18fafd5651c79006876e\"" May 14 00:09:22.594566 containerd[1513]: time="2025-05-14T00:09:22.594458790Z" level=info msg="CreateContainer within sandbox \"2229ce920f9ccf1a3c1c58a4eda16de4dcc6318b786757697e238d423e54a289\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} 
returns container id \"6c0a3053192afb1985fbf29b6abb0e2d848f7309453b8106c215564dac6816a7\"" May 14 00:09:22.594945 containerd[1513]: time="2025-05-14T00:09:22.594745906Z" level=info msg="StartContainer for \"1e597936f54823210e782f2bfc97eb352d930c6342ac18fafd5651c79006876e\"" May 14 00:09:22.595211 containerd[1513]: time="2025-05-14T00:09:22.595194007Z" level=info msg="StartContainer for \"6c0a3053192afb1985fbf29b6abb0e2d848f7309453b8106c215564dac6816a7\"" May 14 00:09:22.595894 containerd[1513]: time="2025-05-14T00:09:22.595852675Z" level=info msg="connecting to shim 1e597936f54823210e782f2bfc97eb352d930c6342ac18fafd5651c79006876e" address="unix:///run/containerd/s/fe3a45e467981e6bad1cffcbe01792ed01d309b51cdb1251b9a281f9e497b85d" protocol=ttrpc version=3 May 14 00:09:22.597085 containerd[1513]: time="2025-05-14T00:09:22.596583575Z" level=info msg="connecting to shim 6c0a3053192afb1985fbf29b6abb0e2d848f7309453b8106c215564dac6816a7" address="unix:///run/containerd/s/37e02c690ecdbea2650114137114b9555805c85f740364494a3e43cc392ba331" protocol=ttrpc version=3 May 14 00:09:22.610876 containerd[1513]: time="2025-05-14T00:09:22.610830679Z" level=info msg="Container bbb26237b7adc9d210c1a77fc1ca3d3df36e3e3e9073b44a2382acbd024c47f2: CDI devices from CRI Config.CDIDevices: []" May 14 00:09:22.614462 systemd[1]: Started cri-containerd-6c0a3053192afb1985fbf29b6abb0e2d848f7309453b8106c215564dac6816a7.scope - libcontainer container 6c0a3053192afb1985fbf29b6abb0e2d848f7309453b8106c215564dac6816a7. May 14 00:09:22.628284 containerd[1513]: time="2025-05-14T00:09:22.626971054Z" level=info msg="CreateContainer within sandbox \"ad9800ca077204fb24b5ed3c33039e327bc41c575ec0a36d27830fce576bd2a4\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"bbb26237b7adc9d210c1a77fc1ca3d3df36e3e3e9073b44a2382acbd024c47f2\"" May 14 00:09:22.628284 containerd[1513]: time="2025-05-14T00:09:22.627955860Z" level=info msg="StartContainer for \"bbb26237b7adc9d210c1a77fc1ca3d3df36e3e3e9073b44a2382acbd024c47f2\"" May 14 00:09:22.628569 systemd[1]: Started cri-containerd-1e597936f54823210e782f2bfc97eb352d930c6342ac18fafd5651c79006876e.scope - libcontainer container 1e597936f54823210e782f2bfc97eb352d930c6342ac18fafd5651c79006876e. May 14 00:09:22.629882 kubelet[2782]: E0514 00:09:22.629631 2782 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://95.217.191.100:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4284-0-0-n-fdde459219?timeout=10s\": dial tcp 95.217.191.100:6443: connect: connection refused" interval="1.6s" May 14 00:09:22.629951 containerd[1513]: time="2025-05-14T00:09:22.629824867Z" level=info msg="connecting to shim bbb26237b7adc9d210c1a77fc1ca3d3df36e3e3e9073b44a2382acbd024c47f2" address="unix:///run/containerd/s/02b38001ffbd3c83a209a5776b1fb3f93c1468436286f8f34615b0c9e45358d2" protocol=ttrpc version=3 May 14 00:09:22.651283 systemd[1]: Started cri-containerd-bbb26237b7adc9d210c1a77fc1ca3d3df36e3e3e9073b44a2382acbd024c47f2.scope - libcontainer container bbb26237b7adc9d210c1a77fc1ca3d3df36e3e3e9073b44a2382acbd024c47f2. 
May 14 00:09:22.684350 containerd[1513]: time="2025-05-14T00:09:22.684286739Z" level=info msg="StartContainer for \"6c0a3053192afb1985fbf29b6abb0e2d848f7309453b8106c215564dac6816a7\" returns successfully" May 14 00:09:22.713778 kubelet[2782]: W0514 00:09:22.713723 2782 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://95.217.191.100:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 95.217.191.100:6443: connect: connection refused May 14 00:09:22.714729 kubelet[2782]: E0514 00:09:22.714596 2782 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://95.217.191.100:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 95.217.191.100:6443: connect: connection refused May 14 00:09:22.733270 kubelet[2782]: I0514 00:09:22.732700 2782 kubelet_node_status.go:73] "Attempting to register node" node="ci-4284-0-0-n-fdde459219" May 14 00:09:22.733270 kubelet[2782]: E0514 00:09:22.733227 2782 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://95.217.191.100:6443/api/v1/nodes\": dial tcp 95.217.191.100:6443: connect: connection refused" node="ci-4284-0-0-n-fdde459219" May 14 00:09:22.736599 containerd[1513]: time="2025-05-14T00:09:22.736470123Z" level=info msg="StartContainer for \"bbb26237b7adc9d210c1a77fc1ca3d3df36e3e3e9073b44a2382acbd024c47f2\" returns successfully" May 14 00:09:22.755558 containerd[1513]: time="2025-05-14T00:09:22.755475642Z" level=info msg="StartContainer for \"1e597936f54823210e782f2bfc97eb352d930c6342ac18fafd5651c79006876e\" returns successfully" May 14 00:09:24.337000 kubelet[2782]: I0514 00:09:24.336560 2782 kubelet_node_status.go:73] "Attempting to register node" node="ci-4284-0-0-n-fdde459219" May 14 00:09:24.708837 kubelet[2782]: E0514 00:09:24.708787 2782 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4284-0-0-n-fdde459219\" not found" node="ci-4284-0-0-n-fdde459219" May 14 00:09:24.833044 kubelet[2782]: I0514 00:09:24.832467 2782 kubelet_node_status.go:76] "Successfully registered node" node="ci-4284-0-0-n-fdde459219" May 14 00:09:24.853530 kubelet[2782]: E0514 00:09:24.853490 2782 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4284-0-0-n-fdde459219\" not found" May 14 00:09:24.953909 kubelet[2782]: E0514 00:09:24.953856 2782 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4284-0-0-n-fdde459219\" not found" May 14 00:09:25.198552 kubelet[2782]: I0514 00:09:25.198466 2782 apiserver.go:52] "Watching apiserver" May 14 00:09:25.217995 kubelet[2782]: I0514 00:09:25.217917 2782 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 14 00:09:27.192867 systemd[1]: Reload requested from client PID 3053 ('systemctl') (unit session-7.scope)... May 14 00:09:27.192894 systemd[1]: Reloading... May 14 00:09:27.346137 zram_generator::config[3098]: No configuration found. May 14 00:09:27.462210 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 14 00:09:27.605103 systemd[1]: Reloading finished in 411 ms. May 14 00:09:27.634432 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... 
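The registration loop above keeps posting the Node object until the apiserver answers, and at 00:09:24 it finally succeeds ("Successfully registered node"). A small client-go sketch that performs the same check from outside the kubelet; the kubeconfig path is an assumption, use whichever admin or kubelet kubeconfig exists on the host:

    package main

    import (
        "context"
        "fmt"
        "log"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubelet.conf") // assumed path
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        node, err := cs.CoreV1().Nodes().Get(context.Background(), "ci-4284-0-0-n-fdde459219", metav1.GetOptions{})
        if err != nil {
            // While the apiserver is down this fails the same way the
            // "Unable to register node with API server" entries do.
            log.Fatal(err)
        }
        fmt.Println("node registered:", node.Name)
    }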
May 14 00:09:27.644706 systemd[1]: kubelet.service: Deactivated successfully. May 14 00:09:27.645151 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 14 00:09:27.645268 systemd[1]: kubelet.service: Consumed 956ms CPU time, 115.2M memory peak. May 14 00:09:27.647792 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 00:09:27.777264 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 00:09:27.786743 (kubelet)[3148]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 14 00:09:27.903237 kubelet[3148]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 14 00:09:27.903237 kubelet[3148]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 14 00:09:27.903237 kubelet[3148]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 14 00:09:27.903708 kubelet[3148]: I0514 00:09:27.903291 3148 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 14 00:09:27.913719 kubelet[3148]: I0514 00:09:27.913641 3148 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" May 14 00:09:27.913719 kubelet[3148]: I0514 00:09:27.913676 3148 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 14 00:09:27.914224 kubelet[3148]: I0514 00:09:27.913997 3148 server.go:927] "Client rotation is on, will bootstrap in background" May 14 00:09:27.915728 kubelet[3148]: I0514 00:09:27.915689 3148 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 14 00:09:27.917187 kubelet[3148]: I0514 00:09:27.917118 3148 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 14 00:09:27.923657 kubelet[3148]: I0514 00:09:27.923625 3148 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 14 00:09:27.923915 kubelet[3148]: I0514 00:09:27.923840 3148 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 14 00:09:27.924175 kubelet[3148]: I0514 00:09:27.923862 3148 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4284-0-0-n-fdde459219","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} May 14 00:09:27.924175 kubelet[3148]: I0514 00:09:27.924112 3148 topology_manager.go:138] "Creating topology manager with none policy" May 14 00:09:27.924175 kubelet[3148]: I0514 00:09:27.924121 3148 container_manager_linux.go:301] "Creating device plugin manager" May 14 00:09:27.927436 kubelet[3148]: I0514 00:09:27.927364 3148 state_mem.go:36] "Initialized new in-memory state store" May 14 00:09:27.927608 kubelet[3148]: I0514 00:09:27.927596 3148 kubelet.go:400] "Attempting to sync node with API server" May 14 00:09:27.927662 kubelet[3148]: I0514 00:09:27.927617 3148 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" May 14 00:09:27.931079 kubelet[3148]: I0514 00:09:27.929150 3148 kubelet.go:312] "Adding apiserver pod source" May 14 00:09:27.931079 kubelet[3148]: I0514 00:09:27.929202 3148 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 14 00:09:27.935197 kubelet[3148]: I0514 00:09:27.935159 3148 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v2.0.1" apiVersion="v1" May 14 00:09:27.935385 kubelet[3148]: I0514 00:09:27.935342 3148 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 14 00:09:27.935749 kubelet[3148]: I0514 00:09:27.935728 3148 server.go:1264] "Started kubelet" May 14 00:09:27.942907 kubelet[3148]: I0514 00:09:27.942864 3148 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 14 00:09:27.953059 kubelet[3148]: E0514 00:09:27.951488 3148 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 14 00:09:27.958358 kubelet[3148]: I0514 00:09:27.958109 3148 volume_manager.go:291] "Starting Kubelet Volume Manager" May 14 00:09:27.960502 kubelet[3148]: I0514 00:09:27.960475 3148 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 14 00:09:27.963914 kubelet[3148]: I0514 00:09:27.963832 3148 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 14 00:09:27.964441 kubelet[3148]: I0514 00:09:27.964409 3148 reconciler.go:26] "Reconciler: start to sync state" May 14 00:09:27.966669 kubelet[3148]: I0514 00:09:27.966525 3148 server.go:455] "Adding debug handlers to kubelet server" May 14 00:09:27.970456 kubelet[3148]: I0514 00:09:27.970397 3148 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 14 00:09:27.970800 kubelet[3148]: I0514 00:09:27.970788 3148 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 14 00:09:27.972075 kubelet[3148]: I0514 00:09:27.972055 3148 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 14 00:09:27.974276 kubelet[3148]: I0514 00:09:27.974259 3148 factory.go:221] Registration of the containerd container factory successfully May 14 00:09:27.974396 kubelet[3148]: I0514 00:09:27.974389 3148 factory.go:221] Registration of the systemd container factory successfully May 14 00:09:27.981403 kubelet[3148]: I0514 00:09:27.981363 3148 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 14 00:09:27.982511 kubelet[3148]: I0514 00:09:27.982478 3148 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 14 00:09:27.982511 kubelet[3148]: I0514 00:09:27.982512 3148 status_manager.go:217] "Starting to sync pod status with apiserver" May 14 00:09:27.982604 kubelet[3148]: I0514 00:09:27.982530 3148 kubelet.go:2337] "Starting kubelet main sync loop" May 14 00:09:27.982604 kubelet[3148]: E0514 00:09:27.982569 3148 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 14 00:09:28.017221 kubelet[3148]: I0514 00:09:28.017133 3148 cpu_manager.go:214] "Starting CPU manager" policy="none" May 14 00:09:28.017221 kubelet[3148]: I0514 00:09:28.017167 3148 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 14 00:09:28.017221 kubelet[3148]: I0514 00:09:28.017209 3148 state_mem.go:36] "Initialized new in-memory state store" May 14 00:09:28.017416 kubelet[3148]: I0514 00:09:28.017379 3148 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 14 00:09:28.017416 kubelet[3148]: I0514 00:09:28.017390 3148 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 14 00:09:28.017416 kubelet[3148]: I0514 00:09:28.017408 3148 policy_none.go:49] "None policy: Start" May 14 00:09:28.018223 kubelet[3148]: I0514 00:09:28.018188 3148 memory_manager.go:170] "Starting memorymanager" policy="None" May 14 00:09:28.018223 kubelet[3148]: I0514 00:09:28.018219 3148 state_mem.go:35] "Initializing new in-memory state store" May 14 00:09:28.018499 kubelet[3148]: I0514 00:09:28.018478 3148 state_mem.go:75] "Updated machine memory state" May 14 00:09:28.022767 kubelet[3148]: I0514 00:09:28.022730 3148 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 14 00:09:28.022930 kubelet[3148]: I0514 00:09:28.022897 3148 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 14 00:09:28.023230 kubelet[3148]: I0514 00:09:28.023004 3148 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 14 00:09:28.064844 kubelet[3148]: I0514 00:09:28.062763 3148 kubelet_node_status.go:73] "Attempting to register node" node="ci-4284-0-0-n-fdde459219" May 14 00:09:28.072983 kubelet[3148]: I0514 00:09:28.072895 3148 kubelet_node_status.go:112] "Node was previously registered" node="ci-4284-0-0-n-fdde459219" May 14 00:09:28.073189 kubelet[3148]: I0514 00:09:28.073063 3148 kubelet_node_status.go:76] "Successfully registered node" node="ci-4284-0-0-n-fdde459219" May 14 00:09:28.083208 kubelet[3148]: I0514 00:09:28.083143 3148 topology_manager.go:215] "Topology Admit Handler" podUID="3f67c79af95b0bf9aa451f8b5289ba83" podNamespace="kube-system" podName="kube-apiserver-ci-4284-0-0-n-fdde459219" May 14 00:09:28.083375 kubelet[3148]: I0514 00:09:28.083294 3148 topology_manager.go:215] "Topology Admit Handler" podUID="d876bbda8581654a37f2742b0c72b06a" podNamespace="kube-system" podName="kube-controller-manager-ci-4284-0-0-n-fdde459219" May 14 00:09:28.084113 kubelet[3148]: I0514 00:09:28.084083 3148 topology_manager.go:215] "Topology Admit Handler" podUID="518627cc0b31f0b02b35cf66120496ea" podNamespace="kube-system" podName="kube-scheduler-ci-4284-0-0-n-fdde459219" May 14 00:09:28.099276 kubelet[3148]: E0514 00:09:28.098569 3148 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4284-0-0-n-fdde459219\" already exists" pod="kube-system/kube-controller-manager-ci-4284-0-0-n-fdde459219" May 14 00:09:28.099917 kubelet[3148]: E0514 00:09:28.099859 
3148 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-4284-0-0-n-fdde459219\" already exists" pod="kube-system/kube-scheduler-ci-4284-0-0-n-fdde459219" May 14 00:09:28.166397 kubelet[3148]: I0514 00:09:28.166338 3148 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3f67c79af95b0bf9aa451f8b5289ba83-ca-certs\") pod \"kube-apiserver-ci-4284-0-0-n-fdde459219\" (UID: \"3f67c79af95b0bf9aa451f8b5289ba83\") " pod="kube-system/kube-apiserver-ci-4284-0-0-n-fdde459219" May 14 00:09:28.166397 kubelet[3148]: I0514 00:09:28.166394 3148 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3f67c79af95b0bf9aa451f8b5289ba83-k8s-certs\") pod \"kube-apiserver-ci-4284-0-0-n-fdde459219\" (UID: \"3f67c79af95b0bf9aa451f8b5289ba83\") " pod="kube-system/kube-apiserver-ci-4284-0-0-n-fdde459219" May 14 00:09:28.166565 kubelet[3148]: I0514 00:09:28.166430 3148 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3f67c79af95b0bf9aa451f8b5289ba83-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4284-0-0-n-fdde459219\" (UID: \"3f67c79af95b0bf9aa451f8b5289ba83\") " pod="kube-system/kube-apiserver-ci-4284-0-0-n-fdde459219" May 14 00:09:28.166565 kubelet[3148]: I0514 00:09:28.166456 3148 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d876bbda8581654a37f2742b0c72b06a-ca-certs\") pod \"kube-controller-manager-ci-4284-0-0-n-fdde459219\" (UID: \"d876bbda8581654a37f2742b0c72b06a\") " pod="kube-system/kube-controller-manager-ci-4284-0-0-n-fdde459219" May 14 00:09:28.166565 kubelet[3148]: I0514 00:09:28.166481 3148 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d876bbda8581654a37f2742b0c72b06a-k8s-certs\") pod \"kube-controller-manager-ci-4284-0-0-n-fdde459219\" (UID: \"d876bbda8581654a37f2742b0c72b06a\") " pod="kube-system/kube-controller-manager-ci-4284-0-0-n-fdde459219" May 14 00:09:28.166565 kubelet[3148]: I0514 00:09:28.166505 3148 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d876bbda8581654a37f2742b0c72b06a-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4284-0-0-n-fdde459219\" (UID: \"d876bbda8581654a37f2742b0c72b06a\") " pod="kube-system/kube-controller-manager-ci-4284-0-0-n-fdde459219" May 14 00:09:28.166565 kubelet[3148]: I0514 00:09:28.166529 3148 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/518627cc0b31f0b02b35cf66120496ea-kubeconfig\") pod \"kube-scheduler-ci-4284-0-0-n-fdde459219\" (UID: \"518627cc0b31f0b02b35cf66120496ea\") " pod="kube-system/kube-scheduler-ci-4284-0-0-n-fdde459219" May 14 00:09:28.166672 kubelet[3148]: I0514 00:09:28.166554 3148 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d876bbda8581654a37f2742b0c72b06a-flexvolume-dir\") pod \"kube-controller-manager-ci-4284-0-0-n-fdde459219\" (UID: \"d876bbda8581654a37f2742b0c72b06a\") " 
pod="kube-system/kube-controller-manager-ci-4284-0-0-n-fdde459219" May 14 00:09:28.166672 kubelet[3148]: I0514 00:09:28.166578 3148 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d876bbda8581654a37f2742b0c72b06a-kubeconfig\") pod \"kube-controller-manager-ci-4284-0-0-n-fdde459219\" (UID: \"d876bbda8581654a37f2742b0c72b06a\") " pod="kube-system/kube-controller-manager-ci-4284-0-0-n-fdde459219" May 14 00:09:28.187400 sudo[3180]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin May 14 00:09:28.187654 sudo[3180]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) May 14 00:09:28.722355 sudo[3180]: pam_unix(sudo:session): session closed for user root May 14 00:09:28.931868 kubelet[3148]: I0514 00:09:28.930317 3148 apiserver.go:52] "Watching apiserver" May 14 00:09:28.964936 kubelet[3148]: I0514 00:09:28.964831 3148 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 14 00:09:29.025782 kubelet[3148]: E0514 00:09:29.025618 3148 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4284-0-0-n-fdde459219\" already exists" pod="kube-system/kube-apiserver-ci-4284-0-0-n-fdde459219" May 14 00:09:29.068865 kubelet[3148]: I0514 00:09:29.068319 3148 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4284-0-0-n-fdde459219" podStartSLOduration=1.068279685 podStartE2EDuration="1.068279685s" podCreationTimestamp="2025-05-14 00:09:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 00:09:29.055116305 +0000 UTC m=+1.235429496" watchObservedRunningTime="2025-05-14 00:09:29.068279685 +0000 UTC m=+1.248592886" May 14 00:09:29.088405 kubelet[3148]: I0514 00:09:29.088149 3148 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4284-0-0-n-fdde459219" podStartSLOduration=2.08812027 podStartE2EDuration="2.08812027s" podCreationTimestamp="2025-05-14 00:09:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 00:09:29.069648154 +0000 UTC m=+1.249961356" watchObservedRunningTime="2025-05-14 00:09:29.08812027 +0000 UTC m=+1.268433461" May 14 00:09:29.113903 kubelet[3148]: I0514 00:09:29.113724 3148 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4284-0-0-n-fdde459219" podStartSLOduration=3.113698511 podStartE2EDuration="3.113698511s" podCreationTimestamp="2025-05-14 00:09:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 00:09:29.089261943 +0000 UTC m=+1.269575133" watchObservedRunningTime="2025-05-14 00:09:29.113698511 +0000 UTC m=+1.294011703" May 14 00:09:30.474299 sudo[2135]: pam_unix(sudo:session): session closed for user root May 14 00:09:30.633054 sshd[2134]: Connection closed by 139.178.89.65 port 48930 May 14 00:09:30.634881 sshd-session[2132]: pam_unix(sshd:session): session closed for user core May 14 00:09:30.639063 systemd[1]: sshd@7-95.217.191.100:22-139.178.89.65:48930.service: Deactivated successfully. May 14 00:09:30.642539 systemd[1]: session-7.scope: Deactivated successfully. 
May 14 00:09:30.642753 systemd[1]: session-7.scope: Consumed 5.793s CPU time, 229.3M memory peak. May 14 00:09:30.645080 systemd-logind[1497]: Session 7 logged out. Waiting for processes to exit. May 14 00:09:30.646571 systemd-logind[1497]: Removed session 7. May 14 00:09:42.357373 kubelet[3148]: I0514 00:09:42.357089 3148 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 14 00:09:42.359502 kubelet[3148]: I0514 00:09:42.358001 3148 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 14 00:09:42.359617 containerd[1513]: time="2025-05-14T00:09:42.357578170Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 14 00:09:42.868583 kubelet[3148]: I0514 00:09:42.867963 3148 topology_manager.go:215] "Topology Admit Handler" podUID="b450014c-dea3-4239-8126-6384b4c2ff28" podNamespace="kube-system" podName="kube-proxy-k5tvw" May 14 00:09:42.894722 systemd[1]: Created slice kubepods-besteffort-podb450014c_dea3_4239_8126_6384b4c2ff28.slice - libcontainer container kubepods-besteffort-podb450014c_dea3_4239_8126_6384b4c2ff28.slice. May 14 00:09:42.905122 kubelet[3148]: I0514 00:09:42.904371 3148 topology_manager.go:215] "Topology Admit Handler" podUID="c56b5a78-42f9-4058-9b4b-3aab2a24d615" podNamespace="kube-system" podName="cilium-x5m89" May 14 00:09:42.914573 systemd[1]: Created slice kubepods-burstable-podc56b5a78_42f9_4058_9b4b_3aab2a24d615.slice - libcontainer container kubepods-burstable-podc56b5a78_42f9_4058_9b4b_3aab2a24d615.slice. May 14 00:09:42.958127 kubelet[3148]: I0514 00:09:42.958069 3148 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c56b5a78-42f9-4058-9b4b-3aab2a24d615-cni-path\") pod \"cilium-x5m89\" (UID: \"c56b5a78-42f9-4058-9b4b-3aab2a24d615\") " pod="kube-system/cilium-x5m89" May 14 00:09:42.958127 kubelet[3148]: I0514 00:09:42.958103 3148 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c56b5a78-42f9-4058-9b4b-3aab2a24d615-lib-modules\") pod \"cilium-x5m89\" (UID: \"c56b5a78-42f9-4058-9b4b-3aab2a24d615\") " pod="kube-system/cilium-x5m89" May 14 00:09:42.958345 kubelet[3148]: I0514 00:09:42.958182 3148 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c56b5a78-42f9-4058-9b4b-3aab2a24d615-xtables-lock\") pod \"cilium-x5m89\" (UID: \"c56b5a78-42f9-4058-9b4b-3aab2a24d615\") " pod="kube-system/cilium-x5m89" May 14 00:09:42.958345 kubelet[3148]: I0514 00:09:42.958234 3148 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vdpd\" (UniqueName: \"kubernetes.io/projected/c56b5a78-42f9-4058-9b4b-3aab2a24d615-kube-api-access-4vdpd\") pod \"cilium-x5m89\" (UID: \"c56b5a78-42f9-4058-9b4b-3aab2a24d615\") " pod="kube-system/cilium-x5m89" May 14 00:09:42.958345 kubelet[3148]: I0514 00:09:42.958255 3148 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jw49\" (UniqueName: \"kubernetes.io/projected/b450014c-dea3-4239-8126-6384b4c2ff28-kube-api-access-8jw49\") pod \"kube-proxy-k5tvw\" (UID: \"b450014c-dea3-4239-8126-6384b4c2ff28\") " pod="kube-system/kube-proxy-k5tvw" May 14 00:09:42.958345 kubelet[3148]: I0514 00:09:42.958268 
3148 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b450014c-dea3-4239-8126-6384b4c2ff28-kube-proxy\") pod \"kube-proxy-k5tvw\" (UID: \"b450014c-dea3-4239-8126-6384b4c2ff28\") " pod="kube-system/kube-proxy-k5tvw" May 14 00:09:42.958345 kubelet[3148]: I0514 00:09:42.958340 3148 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b450014c-dea3-4239-8126-6384b4c2ff28-lib-modules\") pod \"kube-proxy-k5tvw\" (UID: \"b450014c-dea3-4239-8126-6384b4c2ff28\") " pod="kube-system/kube-proxy-k5tvw" May 14 00:09:42.958533 kubelet[3148]: I0514 00:09:42.958357 3148 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c56b5a78-42f9-4058-9b4b-3aab2a24d615-etc-cni-netd\") pod \"cilium-x5m89\" (UID: \"c56b5a78-42f9-4058-9b4b-3aab2a24d615\") " pod="kube-system/cilium-x5m89" May 14 00:09:42.958533 kubelet[3148]: I0514 00:09:42.958371 3148 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c56b5a78-42f9-4058-9b4b-3aab2a24d615-clustermesh-secrets\") pod \"cilium-x5m89\" (UID: \"c56b5a78-42f9-4058-9b4b-3aab2a24d615\") " pod="kube-system/cilium-x5m89" May 14 00:09:42.958533 kubelet[3148]: I0514 00:09:42.958418 3148 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c56b5a78-42f9-4058-9b4b-3aab2a24d615-bpf-maps\") pod \"cilium-x5m89\" (UID: \"c56b5a78-42f9-4058-9b4b-3aab2a24d615\") " pod="kube-system/cilium-x5m89" May 14 00:09:42.958533 kubelet[3148]: I0514 00:09:42.958432 3148 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b450014c-dea3-4239-8126-6384b4c2ff28-xtables-lock\") pod \"kube-proxy-k5tvw\" (UID: \"b450014c-dea3-4239-8126-6384b4c2ff28\") " pod="kube-system/kube-proxy-k5tvw" May 14 00:09:42.958533 kubelet[3148]: I0514 00:09:42.958447 3148 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c56b5a78-42f9-4058-9b4b-3aab2a24d615-hostproc\") pod \"cilium-x5m89\" (UID: \"c56b5a78-42f9-4058-9b4b-3aab2a24d615\") " pod="kube-system/cilium-x5m89" May 14 00:09:42.958533 kubelet[3148]: I0514 00:09:42.958461 3148 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c56b5a78-42f9-4058-9b4b-3aab2a24d615-host-proc-sys-net\") pod \"cilium-x5m89\" (UID: \"c56b5a78-42f9-4058-9b4b-3aab2a24d615\") " pod="kube-system/cilium-x5m89" May 14 00:09:42.958747 kubelet[3148]: I0514 00:09:42.958512 3148 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c56b5a78-42f9-4058-9b4b-3aab2a24d615-cilium-config-path\") pod \"cilium-x5m89\" (UID: \"c56b5a78-42f9-4058-9b4b-3aab2a24d615\") " pod="kube-system/cilium-x5m89" May 14 00:09:42.958747 kubelet[3148]: I0514 00:09:42.958530 3148 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: 
\"kubernetes.io/host-path/c56b5a78-42f9-4058-9b4b-3aab2a24d615-cilium-run\") pod \"cilium-x5m89\" (UID: \"c56b5a78-42f9-4058-9b4b-3aab2a24d615\") " pod="kube-system/cilium-x5m89" May 14 00:09:42.958747 kubelet[3148]: I0514 00:09:42.958542 3148 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c56b5a78-42f9-4058-9b4b-3aab2a24d615-cilium-cgroup\") pod \"cilium-x5m89\" (UID: \"c56b5a78-42f9-4058-9b4b-3aab2a24d615\") " pod="kube-system/cilium-x5m89" May 14 00:09:42.958747 kubelet[3148]: I0514 00:09:42.958555 3148 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c56b5a78-42f9-4058-9b4b-3aab2a24d615-host-proc-sys-kernel\") pod \"cilium-x5m89\" (UID: \"c56b5a78-42f9-4058-9b4b-3aab2a24d615\") " pod="kube-system/cilium-x5m89" May 14 00:09:42.958747 kubelet[3148]: I0514 00:09:42.958600 3148 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c56b5a78-42f9-4058-9b4b-3aab2a24d615-hubble-tls\") pod \"cilium-x5m89\" (UID: \"c56b5a78-42f9-4058-9b4b-3aab2a24d615\") " pod="kube-system/cilium-x5m89" May 14 00:09:43.106448 kubelet[3148]: E0514 00:09:43.103574 3148 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found May 14 00:09:43.106448 kubelet[3148]: E0514 00:09:43.103673 3148 projected.go:200] Error preparing data for projected volume kube-api-access-4vdpd for pod kube-system/cilium-x5m89: configmap "kube-root-ca.crt" not found May 14 00:09:43.106448 kubelet[3148]: E0514 00:09:43.103765 3148 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c56b5a78-42f9-4058-9b4b-3aab2a24d615-kube-api-access-4vdpd podName:c56b5a78-42f9-4058-9b4b-3aab2a24d615 nodeName:}" failed. No retries permitted until 2025-05-14 00:09:43.603736419 +0000 UTC m=+15.784049610 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-4vdpd" (UniqueName: "kubernetes.io/projected/c56b5a78-42f9-4058-9b4b-3aab2a24d615-kube-api-access-4vdpd") pod "cilium-x5m89" (UID: "c56b5a78-42f9-4058-9b4b-3aab2a24d615") : configmap "kube-root-ca.crt" not found May 14 00:09:43.106448 kubelet[3148]: E0514 00:09:43.104124 3148 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found May 14 00:09:43.106448 kubelet[3148]: E0514 00:09:43.104190 3148 projected.go:200] Error preparing data for projected volume kube-api-access-8jw49 for pod kube-system/kube-proxy-k5tvw: configmap "kube-root-ca.crt" not found May 14 00:09:43.106448 kubelet[3148]: E0514 00:09:43.104227 3148 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b450014c-dea3-4239-8126-6384b4c2ff28-kube-api-access-8jw49 podName:b450014c-dea3-4239-8126-6384b4c2ff28 nodeName:}" failed. No retries permitted until 2025-05-14 00:09:43.604214285 +0000 UTC m=+15.784527486 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-8jw49" (UniqueName: "kubernetes.io/projected/b450014c-dea3-4239-8126-6384b4c2ff28-kube-api-access-8jw49") pod "kube-proxy-k5tvw" (UID: "b450014c-dea3-4239-8126-6384b4c2ff28") : configmap "kube-root-ca.crt" not found May 14 00:09:43.385787 kubelet[3148]: I0514 00:09:43.385728 3148 topology_manager.go:215] "Topology Admit Handler" podUID="d15f95ec-8ce1-4674-8592-bdaecca7c346" podNamespace="kube-system" podName="cilium-operator-599987898-wc2x2" May 14 00:09:43.401266 systemd[1]: Created slice kubepods-besteffort-podd15f95ec_8ce1_4674_8592_bdaecca7c346.slice - libcontainer container kubepods-besteffort-podd15f95ec_8ce1_4674_8592_bdaecca7c346.slice. May 14 00:09:43.462799 kubelet[3148]: I0514 00:09:43.462745 3148 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b225j\" (UniqueName: \"kubernetes.io/projected/d15f95ec-8ce1-4674-8592-bdaecca7c346-kube-api-access-b225j\") pod \"cilium-operator-599987898-wc2x2\" (UID: \"d15f95ec-8ce1-4674-8592-bdaecca7c346\") " pod="kube-system/cilium-operator-599987898-wc2x2" May 14 00:09:43.463045 kubelet[3148]: I0514 00:09:43.462836 3148 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d15f95ec-8ce1-4674-8592-bdaecca7c346-cilium-config-path\") pod \"cilium-operator-599987898-wc2x2\" (UID: \"d15f95ec-8ce1-4674-8592-bdaecca7c346\") " pod="kube-system/cilium-operator-599987898-wc2x2" May 14 00:09:43.708043 containerd[1513]: time="2025-05-14T00:09:43.707950269Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-wc2x2,Uid:d15f95ec-8ce1-4674-8592-bdaecca7c346,Namespace:kube-system,Attempt:0,}" May 14 00:09:43.734098 containerd[1513]: time="2025-05-14T00:09:43.733247838Z" level=info msg="connecting to shim e46a5d74fe2e14cb405179a60714ad36745ee06cd4f4cdcc00598a2259ffe7e8" address="unix:///run/containerd/s/722d208bb8e02ded334ea0b84b7afd5b01134279317e9e322b6b705757ee136c" namespace=k8s.io protocol=ttrpc version=3 May 14 00:09:43.771370 systemd[1]: Started cri-containerd-e46a5d74fe2e14cb405179a60714ad36745ee06cd4f4cdcc00598a2259ffe7e8.scope - libcontainer container e46a5d74fe2e14cb405179a60714ad36745ee06cd4f4cdcc00598a2259ffe7e8. 
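The MountVolume.SetUp retries above happen because the kube-root-ca.crt ConfigMap does not exist in kube-system yet; the kube-api-access projected volumes bundle that CA, so the kubelet re-queues each mount every 500ms until the controller manager's root-CA publisher creates it. A client-go sketch of the corresponding check (kubeconfig path again an assumption):

    package main

    import (
        "context"
        "fmt"
        "log"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf") // assumed path
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        cm, err := cs.CoreV1().ConfigMaps("kube-system").Get(context.Background(), "kube-root-ca.crt", metav1.GetOptions{})
        if err != nil {
            // While this returns NotFound, the projected volumes above keep failing to mount.
            log.Fatal(err)
        }
        fmt.Println("kube-root-ca.crt present, ca.crt bytes:", len(cm.Data["ca.crt"]))
    }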
May 14 00:09:43.808052 containerd[1513]: time="2025-05-14T00:09:43.806933184Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-k5tvw,Uid:b450014c-dea3-4239-8126-6384b4c2ff28,Namespace:kube-system,Attempt:0,}" May 14 00:09:43.822513 containerd[1513]: time="2025-05-14T00:09:43.821870429Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-x5m89,Uid:c56b5a78-42f9-4058-9b4b-3aab2a24d615,Namespace:kube-system,Attempt:0,}" May 14 00:09:43.835102 containerd[1513]: time="2025-05-14T00:09:43.834783580Z" level=info msg="connecting to shim 3703925220b58aca4e1830bc2b9964060f54c6b2e5113924d3e983dbf2a7130f" address="unix:///run/containerd/s/df7cb7108937ab6e8e0111429c780925f41b678fc6dabe3e13fdf64c84183f69" namespace=k8s.io protocol=ttrpc version=3 May 14 00:09:43.864756 containerd[1513]: time="2025-05-14T00:09:43.864714543Z" level=info msg="connecting to shim 69ffd32419e348c43191c8a88777b742ccb7c30a2f195bfdcece5dfedf87fd0c" address="unix:///run/containerd/s/5dec558a891ae53ad23b83178ed32c6bf4da1348972a9649b60482e4947216ff" namespace=k8s.io protocol=ttrpc version=3 May 14 00:09:43.870367 containerd[1513]: time="2025-05-14T00:09:43.870323446Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-wc2x2,Uid:d15f95ec-8ce1-4674-8592-bdaecca7c346,Namespace:kube-system,Attempt:0,} returns sandbox id \"e46a5d74fe2e14cb405179a60714ad36745ee06cd4f4cdcc00598a2259ffe7e8\"" May 14 00:09:43.874631 containerd[1513]: time="2025-05-14T00:09:43.874537362Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" May 14 00:09:43.880311 systemd[1]: Started cri-containerd-3703925220b58aca4e1830bc2b9964060f54c6b2e5113924d3e983dbf2a7130f.scope - libcontainer container 3703925220b58aca4e1830bc2b9964060f54c6b2e5113924d3e983dbf2a7130f. May 14 00:09:43.905401 systemd[1]: Started cri-containerd-69ffd32419e348c43191c8a88777b742ccb7c30a2f195bfdcece5dfedf87fd0c.scope - libcontainer container 69ffd32419e348c43191c8a88777b742ccb7c30a2f195bfdcece5dfedf87fd0c. 
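The PullImage request for quay.io/cilium/operator-generic is made by digest, which is why the later "Pulled image" line reports an empty repo tag. A sketch of the same pull through the containerd Go client, using the digest reference from the log (same 1.x-versus-2.x import caveat as the earlier containerd example):

    package main

    import (
        "context"
        "fmt"
        "log"

        "github.com/containerd/containerd"            // 1.x client; adjust for the 2.x module layout
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
        ref := "quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e"
        img, err := client.Pull(ctx, ref, containerd.WithPullUnpack)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("pulled", img.Name())
    }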
May 14 00:09:43.922909 containerd[1513]: time="2025-05-14T00:09:43.922847033Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-k5tvw,Uid:b450014c-dea3-4239-8126-6384b4c2ff28,Namespace:kube-system,Attempt:0,} returns sandbox id \"3703925220b58aca4e1830bc2b9964060f54c6b2e5113924d3e983dbf2a7130f\"" May 14 00:09:43.932959 containerd[1513]: time="2025-05-14T00:09:43.932421180Z" level=info msg="CreateContainer within sandbox \"3703925220b58aca4e1830bc2b9964060f54c6b2e5113924d3e983dbf2a7130f\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 14 00:09:43.955681 containerd[1513]: time="2025-05-14T00:09:43.955641227Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-x5m89,Uid:c56b5a78-42f9-4058-9b4b-3aab2a24d615,Namespace:kube-system,Attempt:0,} returns sandbox id \"69ffd32419e348c43191c8a88777b742ccb7c30a2f195bfdcece5dfedf87fd0c\"" May 14 00:09:43.960088 containerd[1513]: time="2025-05-14T00:09:43.959413283Z" level=info msg="Container 43666838b9574e655a3b6503bfaee289353e258ffcc2a35b3f3352720b808d50: CDI devices from CRI Config.CDIDevices: []" May 14 00:09:43.969350 containerd[1513]: time="2025-05-14T00:09:43.969307344Z" level=info msg="CreateContainer within sandbox \"3703925220b58aca4e1830bc2b9964060f54c6b2e5113924d3e983dbf2a7130f\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"43666838b9574e655a3b6503bfaee289353e258ffcc2a35b3f3352720b808d50\"" May 14 00:09:43.972296 containerd[1513]: time="2025-05-14T00:09:43.971824052Z" level=info msg="StartContainer for \"43666838b9574e655a3b6503bfaee289353e258ffcc2a35b3f3352720b808d50\"" May 14 00:09:43.974038 containerd[1513]: time="2025-05-14T00:09:43.973679854Z" level=info msg="connecting to shim 43666838b9574e655a3b6503bfaee289353e258ffcc2a35b3f3352720b808d50" address="unix:///run/containerd/s/df7cb7108937ab6e8e0111429c780925f41b678fc6dabe3e13fdf64c84183f69" protocol=ttrpc version=3 May 14 00:09:43.995422 systemd[1]: Started cri-containerd-43666838b9574e655a3b6503bfaee289353e258ffcc2a35b3f3352720b808d50.scope - libcontainer container 43666838b9574e655a3b6503bfaee289353e258ffcc2a35b3f3352720b808d50. May 14 00:09:44.040168 containerd[1513]: time="2025-05-14T00:09:44.040110788Z" level=info msg="StartContainer for \"43666838b9574e655a3b6503bfaee289353e258ffcc2a35b3f3352720b808d50\" returns successfully" May 14 00:09:45.069485 kubelet[3148]: I0514 00:09:45.068879 3148 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-k5tvw" podStartSLOduration=3.06885142 podStartE2EDuration="3.06885142s" podCreationTimestamp="2025-05-14 00:09:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 00:09:45.06854988 +0000 UTC m=+17.248863081" watchObservedRunningTime="2025-05-14 00:09:45.06885142 +0000 UTC m=+17.249164621" May 14 00:09:45.430706 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1459367387.mount: Deactivated successfully. 
May 14 00:09:45.906746 containerd[1513]: time="2025-05-14T00:09:45.906596104Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:09:45.907921 containerd[1513]: time="2025-05-14T00:09:45.907873776Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" May 14 00:09:45.909050 containerd[1513]: time="2025-05-14T00:09:45.908947649Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:09:45.910068 containerd[1513]: time="2025-05-14T00:09:45.910017616Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.035434038s" May 14 00:09:45.910113 containerd[1513]: time="2025-05-14T00:09:45.910072678Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" May 14 00:09:45.911479 containerd[1513]: time="2025-05-14T00:09:45.911451397Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" May 14 00:09:45.913347 containerd[1513]: time="2025-05-14T00:09:45.913288207Z" level=info msg="CreateContainer within sandbox \"e46a5d74fe2e14cb405179a60714ad36745ee06cd4f4cdcc00598a2259ffe7e8\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" May 14 00:09:45.930467 containerd[1513]: time="2025-05-14T00:09:45.927312822Z" level=info msg="Container 72b5d6046e360e210a5c90addb7ac14d120e7d785e0fa744a93ec08f721bd93b: CDI devices from CRI Config.CDIDevices: []" May 14 00:09:45.938754 containerd[1513]: time="2025-05-14T00:09:45.938689031Z" level=info msg="CreateContainer within sandbox \"e46a5d74fe2e14cb405179a60714ad36745ee06cd4f4cdcc00598a2259ffe7e8\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"72b5d6046e360e210a5c90addb7ac14d120e7d785e0fa744a93ec08f721bd93b\"" May 14 00:09:45.940232 containerd[1513]: time="2025-05-14T00:09:45.940199055Z" level=info msg="StartContainer for \"72b5d6046e360e210a5c90addb7ac14d120e7d785e0fa744a93ec08f721bd93b\"" May 14 00:09:45.941078 containerd[1513]: time="2025-05-14T00:09:45.941009559Z" level=info msg="connecting to shim 72b5d6046e360e210a5c90addb7ac14d120e7d785e0fa744a93ec08f721bd93b" address="unix:///run/containerd/s/722d208bb8e02ded334ea0b84b7afd5b01134279317e9e322b6b705757ee136c" protocol=ttrpc version=3 May 14 00:09:45.969303 systemd[1]: Started cri-containerd-72b5d6046e360e210a5c90addb7ac14d120e7d785e0fa744a93ec08f721bd93b.scope - libcontainer container 72b5d6046e360e210a5c90addb7ac14d120e7d785e0fa744a93ec08f721bd93b. 
May 14 00:09:46.011411 containerd[1513]: time="2025-05-14T00:09:46.011356509Z" level=info msg="StartContainer for \"72b5d6046e360e210a5c90addb7ac14d120e7d785e0fa744a93ec08f721bd93b\" returns successfully" May 14 00:09:46.070950 kubelet[3148]: I0514 00:09:46.070521 3148 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-wc2x2" podStartSLOduration=1.032741258 podStartE2EDuration="3.070494511s" podCreationTimestamp="2025-05-14 00:09:43 +0000 UTC" firstStartedPulling="2025-05-14 00:09:43.873502252 +0000 UTC m=+16.053815422" lastFinishedPulling="2025-05-14 00:09:45.911255504 +0000 UTC m=+18.091568675" observedRunningTime="2025-05-14 00:09:46.068994665 +0000 UTC m=+18.249307856" watchObservedRunningTime="2025-05-14 00:09:46.070494511 +0000 UTC m=+18.250807702" May 14 00:09:50.073201 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1482676411.mount: Deactivated successfully. May 14 00:09:51.689702 containerd[1513]: time="2025-05-14T00:09:51.689634491Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:09:51.690699 containerd[1513]: time="2025-05-14T00:09:51.690482519Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" May 14 00:09:51.693681 containerd[1513]: time="2025-05-14T00:09:51.693361517Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:09:51.695037 containerd[1513]: time="2025-05-14T00:09:51.694991100Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 5.783508955s" May 14 00:09:51.695599 containerd[1513]: time="2025-05-14T00:09:51.695578503Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" May 14 00:09:51.697553 containerd[1513]: time="2025-05-14T00:09:51.697499349Z" level=info msg="CreateContainer within sandbox \"69ffd32419e348c43191c8a88777b742ccb7c30a2f195bfdcece5dfedf87fd0c\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 14 00:09:51.741102 containerd[1513]: time="2025-05-14T00:09:51.740381506Z" level=info msg="Container 04c07a9c4290f8945063aba7c48a24a66f4f3254903e2c79af5faa8df2068d59: CDI devices from CRI Config.CDIDevices: []" May 14 00:09:51.742702 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2881398862.mount: Deactivated successfully. 
May 14 00:09:51.756727 containerd[1513]: time="2025-05-14T00:09:51.756599990Z" level=info msg="CreateContainer within sandbox \"69ffd32419e348c43191c8a88777b742ccb7c30a2f195bfdcece5dfedf87fd0c\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"04c07a9c4290f8945063aba7c48a24a66f4f3254903e2c79af5faa8df2068d59\"" May 14 00:09:51.757396 containerd[1513]: time="2025-05-14T00:09:51.757358842Z" level=info msg="StartContainer for \"04c07a9c4290f8945063aba7c48a24a66f4f3254903e2c79af5faa8df2068d59\"" May 14 00:09:51.759174 containerd[1513]: time="2025-05-14T00:09:51.759010777Z" level=info msg="connecting to shim 04c07a9c4290f8945063aba7c48a24a66f4f3254903e2c79af5faa8df2068d59" address="unix:///run/containerd/s/5dec558a891ae53ad23b83178ed32c6bf4da1348972a9649b60482e4947216ff" protocol=ttrpc version=3 May 14 00:09:51.907312 systemd[1]: Started cri-containerd-04c07a9c4290f8945063aba7c48a24a66f4f3254903e2c79af5faa8df2068d59.scope - libcontainer container 04c07a9c4290f8945063aba7c48a24a66f4f3254903e2c79af5faa8df2068d59. May 14 00:09:51.982736 containerd[1513]: time="2025-05-14T00:09:51.982533495Z" level=info msg="StartContainer for \"04c07a9c4290f8945063aba7c48a24a66f4f3254903e2c79af5faa8df2068d59\" returns successfully" May 14 00:09:52.003874 systemd[1]: cri-containerd-04c07a9c4290f8945063aba7c48a24a66f4f3254903e2c79af5faa8df2068d59.scope: Deactivated successfully. May 14 00:09:52.122577 containerd[1513]: time="2025-05-14T00:09:52.122423465Z" level=info msg="TaskExit event in podsandbox handler container_id:\"04c07a9c4290f8945063aba7c48a24a66f4f3254903e2c79af5faa8df2068d59\" id:\"04c07a9c4290f8945063aba7c48a24a66f4f3254903e2c79af5faa8df2068d59\" pid:3605 exited_at:{seconds:1747181392 nanos:52250073}" May 14 00:09:52.130143 containerd[1513]: time="2025-05-14T00:09:52.129240236Z" level=info msg="received exit event container_id:\"04c07a9c4290f8945063aba7c48a24a66f4f3254903e2c79af5faa8df2068d59\" id:\"04c07a9c4290f8945063aba7c48a24a66f4f3254903e2c79af5faa8df2068d59\" pid:3605 exited_at:{seconds:1747181392 nanos:52250073}" May 14 00:09:52.732206 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-04c07a9c4290f8945063aba7c48a24a66f4f3254903e2c79af5faa8df2068d59-rootfs.mount: Deactivated successfully. 
May 14 00:09:53.124667 containerd[1513]: time="2025-05-14T00:09:53.123862705Z" level=info msg="CreateContainer within sandbox \"69ffd32419e348c43191c8a88777b742ccb7c30a2f195bfdcece5dfedf87fd0c\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 14 00:09:53.189193 containerd[1513]: time="2025-05-14T00:09:53.187212819Z" level=info msg="Container dc0ac5479f4243b87bbe303d5b624869f5e69937fb8fd146de8527e0166a300b: CDI devices from CRI Config.CDIDevices: []" May 14 00:09:53.203095 containerd[1513]: time="2025-05-14T00:09:53.202832953Z" level=info msg="CreateContainer within sandbox \"69ffd32419e348c43191c8a88777b742ccb7c30a2f195bfdcece5dfedf87fd0c\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"dc0ac5479f4243b87bbe303d5b624869f5e69937fb8fd146de8527e0166a300b\"" May 14 00:09:53.205264 containerd[1513]: time="2025-05-14T00:09:53.204106024Z" level=info msg="StartContainer for \"dc0ac5479f4243b87bbe303d5b624869f5e69937fb8fd146de8527e0166a300b\"" May 14 00:09:53.206224 containerd[1513]: time="2025-05-14T00:09:53.206168645Z" level=info msg="connecting to shim dc0ac5479f4243b87bbe303d5b624869f5e69937fb8fd146de8527e0166a300b" address="unix:///run/containerd/s/5dec558a891ae53ad23b83178ed32c6bf4da1348972a9649b60482e4947216ff" protocol=ttrpc version=3 May 14 00:09:53.241337 systemd[1]: Started cri-containerd-dc0ac5479f4243b87bbe303d5b624869f5e69937fb8fd146de8527e0166a300b.scope - libcontainer container dc0ac5479f4243b87bbe303d5b624869f5e69937fb8fd146de8527e0166a300b. May 14 00:09:53.296107 containerd[1513]: time="2025-05-14T00:09:53.295924994Z" level=info msg="StartContainer for \"dc0ac5479f4243b87bbe303d5b624869f5e69937fb8fd146de8527e0166a300b\" returns successfully" May 14 00:09:53.313896 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 14 00:09:53.314382 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 14 00:09:53.315164 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... May 14 00:09:53.318399 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 14 00:09:53.324243 containerd[1513]: time="2025-05-14T00:09:53.323579171Z" level=info msg="received exit event container_id:\"dc0ac5479f4243b87bbe303d5b624869f5e69937fb8fd146de8527e0166a300b\" id:\"dc0ac5479f4243b87bbe303d5b624869f5e69937fb8fd146de8527e0166a300b\" pid:3650 exited_at:{seconds:1747181393 nanos:322926696}" May 14 00:09:53.324243 containerd[1513]: time="2025-05-14T00:09:53.323929293Z" level=info msg="TaskExit event in podsandbox handler container_id:\"dc0ac5479f4243b87bbe303d5b624869f5e69937fb8fd146de8527e0166a300b\" id:\"dc0ac5479f4243b87bbe303d5b624869f5e69937fb8fd146de8527e0166a300b\" pid:3650 exited_at:{seconds:1747181393 nanos:322926696}" May 14 00:09:53.324984 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 14 00:09:53.327036 systemd[1]: cri-containerd-dc0ac5479f4243b87bbe303d5b624869f5e69937fb8fd146de8527e0166a300b.scope: Deactivated successfully. May 14 00:09:53.366180 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 14 00:09:53.732694 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dc0ac5479f4243b87bbe303d5b624869f5e69937fb8fd146de8527e0166a300b-rootfs.mount: Deactivated successfully. 
May 14 00:09:54.127757 containerd[1513]: time="2025-05-14T00:09:54.127544464Z" level=info msg="CreateContainer within sandbox \"69ffd32419e348c43191c8a88777b742ccb7c30a2f195bfdcece5dfedf87fd0c\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 14 00:09:54.158063 containerd[1513]: time="2025-05-14T00:09:54.155865655Z" level=info msg="Container daef33a50e6eae410afea0f636ef9fea4d34a0e3a7bb8278b7a2e203985731c4: CDI devices from CRI Config.CDIDevices: []" May 14 00:09:54.176453 containerd[1513]: time="2025-05-14T00:09:54.176375691Z" level=info msg="CreateContainer within sandbox \"69ffd32419e348c43191c8a88777b742ccb7c30a2f195bfdcece5dfedf87fd0c\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"daef33a50e6eae410afea0f636ef9fea4d34a0e3a7bb8278b7a2e203985731c4\"" May 14 00:09:54.178154 containerd[1513]: time="2025-05-14T00:09:54.178091178Z" level=info msg="StartContainer for \"daef33a50e6eae410afea0f636ef9fea4d34a0e3a7bb8278b7a2e203985731c4\"" May 14 00:09:54.181961 containerd[1513]: time="2025-05-14T00:09:54.181906377Z" level=info msg="connecting to shim daef33a50e6eae410afea0f636ef9fea4d34a0e3a7bb8278b7a2e203985731c4" address="unix:///run/containerd/s/5dec558a891ae53ad23b83178ed32c6bf4da1348972a9649b60482e4947216ff" protocol=ttrpc version=3 May 14 00:09:54.214335 systemd[1]: Started cri-containerd-daef33a50e6eae410afea0f636ef9fea4d34a0e3a7bb8278b7a2e203985731c4.scope - libcontainer container daef33a50e6eae410afea0f636ef9fea4d34a0e3a7bb8278b7a2e203985731c4. May 14 00:09:54.282470 containerd[1513]: time="2025-05-14T00:09:54.282312602Z" level=info msg="StartContainer for \"daef33a50e6eae410afea0f636ef9fea4d34a0e3a7bb8278b7a2e203985731c4\" returns successfully" May 14 00:09:54.286357 systemd[1]: cri-containerd-daef33a50e6eae410afea0f636ef9fea4d34a0e3a7bb8278b7a2e203985731c4.scope: Deactivated successfully. May 14 00:09:54.289986 containerd[1513]: time="2025-05-14T00:09:54.289547774Z" level=info msg="received exit event container_id:\"daef33a50e6eae410afea0f636ef9fea4d34a0e3a7bb8278b7a2e203985731c4\" id:\"daef33a50e6eae410afea0f636ef9fea4d34a0e3a7bb8278b7a2e203985731c4\" pid:3696 exited_at:{seconds:1747181394 nanos:287819683}" May 14 00:09:54.297945 containerd[1513]: time="2025-05-14T00:09:54.297844433Z" level=info msg="TaskExit event in podsandbox handler container_id:\"daef33a50e6eae410afea0f636ef9fea4d34a0e3a7bb8278b7a2e203985731c4\" id:\"daef33a50e6eae410afea0f636ef9fea4d34a0e3a7bb8278b7a2e203985731c4\" pid:3696 exited_at:{seconds:1747181394 nanos:287819683}" May 14 00:09:54.325374 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-daef33a50e6eae410afea0f636ef9fea4d34a0e3a7bb8278b7a2e203985731c4-rootfs.mount: Deactivated successfully. 
May 14 00:09:55.133232 containerd[1513]: time="2025-05-14T00:09:55.131862271Z" level=info msg="CreateContainer within sandbox \"69ffd32419e348c43191c8a88777b742ccb7c30a2f195bfdcece5dfedf87fd0c\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 14 00:09:55.148388 containerd[1513]: time="2025-05-14T00:09:55.148310340Z" level=info msg="Container b53fb347f4a87fb63b92467381bb44228eb38268892fca18e22c325cc4f2e759: CDI devices from CRI Config.CDIDevices: []" May 14 00:09:55.163972 containerd[1513]: time="2025-05-14T00:09:55.163912263Z" level=info msg="CreateContainer within sandbox \"69ffd32419e348c43191c8a88777b742ccb7c30a2f195bfdcece5dfedf87fd0c\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"b53fb347f4a87fb63b92467381bb44228eb38268892fca18e22c325cc4f2e759\"" May 14 00:09:55.164897 containerd[1513]: time="2025-05-14T00:09:55.164858976Z" level=info msg="StartContainer for \"b53fb347f4a87fb63b92467381bb44228eb38268892fca18e22c325cc4f2e759\"" May 14 00:09:55.167002 containerd[1513]: time="2025-05-14T00:09:55.166946628Z" level=info msg="connecting to shim b53fb347f4a87fb63b92467381bb44228eb38268892fca18e22c325cc4f2e759" address="unix:///run/containerd/s/5dec558a891ae53ad23b83178ed32c6bf4da1348972a9649b60482e4947216ff" protocol=ttrpc version=3 May 14 00:09:55.203368 systemd[1]: Started cri-containerd-b53fb347f4a87fb63b92467381bb44228eb38268892fca18e22c325cc4f2e759.scope - libcontainer container b53fb347f4a87fb63b92467381bb44228eb38268892fca18e22c325cc4f2e759. May 14 00:09:55.238577 systemd[1]: cri-containerd-b53fb347f4a87fb63b92467381bb44228eb38268892fca18e22c325cc4f2e759.scope: Deactivated successfully. May 14 00:09:55.242078 containerd[1513]: time="2025-05-14T00:09:55.242012030Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b53fb347f4a87fb63b92467381bb44228eb38268892fca18e22c325cc4f2e759\" id:\"b53fb347f4a87fb63b92467381bb44228eb38268892fca18e22c325cc4f2e759\" pid:3737 exited_at:{seconds:1747181395 nanos:239109269}" May 14 00:09:55.242371 containerd[1513]: time="2025-05-14T00:09:55.242337576Z" level=info msg="received exit event container_id:\"b53fb347f4a87fb63b92467381bb44228eb38268892fca18e22c325cc4f2e759\" id:\"b53fb347f4a87fb63b92467381bb44228eb38268892fca18e22c325cc4f2e759\" pid:3737 exited_at:{seconds:1747181395 nanos:239109269}" May 14 00:09:55.253127 containerd[1513]: time="2025-05-14T00:09:55.253019167Z" level=info msg="StartContainer for \"b53fb347f4a87fb63b92467381bb44228eb38268892fca18e22c325cc4f2e759\" returns successfully" May 14 00:09:55.269945 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b53fb347f4a87fb63b92467381bb44228eb38268892fca18e22c325cc4f2e759-rootfs.mount: Deactivated successfully. 
May 14 00:09:56.143933 containerd[1513]: time="2025-05-14T00:09:56.143184793Z" level=info msg="CreateContainer within sandbox \"69ffd32419e348c43191c8a88777b742ccb7c30a2f195bfdcece5dfedf87fd0c\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 14 00:09:56.185386 containerd[1513]: time="2025-05-14T00:09:56.185302881Z" level=info msg="Container a117d129c8ab0120d676c8dc67ed4540960219ebe676d854adc8955355f541ee: CDI devices from CRI Config.CDIDevices: []" May 14 00:09:56.206215 containerd[1513]: time="2025-05-14T00:09:56.206089522Z" level=info msg="CreateContainer within sandbox \"69ffd32419e348c43191c8a88777b742ccb7c30a2f195bfdcece5dfedf87fd0c\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"a117d129c8ab0120d676c8dc67ed4540960219ebe676d854adc8955355f541ee\"" May 14 00:09:56.207971 containerd[1513]: time="2025-05-14T00:09:56.207913082Z" level=info msg="StartContainer for \"a117d129c8ab0120d676c8dc67ed4540960219ebe676d854adc8955355f541ee\"" May 14 00:09:56.211146 containerd[1513]: time="2025-05-14T00:09:56.211015206Z" level=info msg="connecting to shim a117d129c8ab0120d676c8dc67ed4540960219ebe676d854adc8955355f541ee" address="unix:///run/containerd/s/5dec558a891ae53ad23b83178ed32c6bf4da1348972a9649b60482e4947216ff" protocol=ttrpc version=3 May 14 00:09:56.256374 systemd[1]: Started cri-containerd-a117d129c8ab0120d676c8dc67ed4540960219ebe676d854adc8955355f541ee.scope - libcontainer container a117d129c8ab0120d676c8dc67ed4540960219ebe676d854adc8955355f541ee. May 14 00:09:56.302107 containerd[1513]: time="2025-05-14T00:09:56.302053204Z" level=info msg="StartContainer for \"a117d129c8ab0120d676c8dc67ed4540960219ebe676d854adc8955355f541ee\" returns successfully" May 14 00:09:56.413398 containerd[1513]: time="2025-05-14T00:09:56.412553263Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a117d129c8ab0120d676c8dc67ed4540960219ebe676d854adc8955355f541ee\" id:\"a24fb26be75f8ae89c1bc6005e7f83d576b8edca578bf93194fb2ea1497797f6\" pid:3804 exited_at:{seconds:1747181396 nanos:411058936}" May 14 00:09:56.459518 kubelet[3148]: I0514 00:09:56.458881 3148 kubelet_node_status.go:497] "Fast updating node status as it just became ready" May 14 00:09:56.495481 kubelet[3148]: I0514 00:09:56.495198 3148 topology_manager.go:215] "Topology Admit Handler" podUID="aa7b2127-b534-455b-a1a4-11ce73c80227" podNamespace="kube-system" podName="coredns-7db6d8ff4d-5gfdt" May 14 00:09:56.504508 kubelet[3148]: I0514 00:09:56.502909 3148 topology_manager.go:215] "Topology Admit Handler" podUID="01da6286-fbe8-4ae9-bb92-38c3e513c31e" podNamespace="kube-system" podName="coredns-7db6d8ff4d-ksmx4" May 14 00:09:56.503422 systemd[1]: Created slice kubepods-burstable-podaa7b2127_b534_455b_a1a4_11ce73c80227.slice - libcontainer container kubepods-burstable-podaa7b2127_b534_455b_a1a4_11ce73c80227.slice. May 14 00:09:56.516390 systemd[1]: Created slice kubepods-burstable-pod01da6286_fbe8_4ae9_bb92_38c3e513c31e.slice - libcontainer container kubepods-burstable-pod01da6286_fbe8_4ae9_bb92_38c3e513c31e.slice. 
May 14 00:09:56.563001 kubelet[3148]: I0514 00:09:56.562960 3148 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwzn5\" (UniqueName: \"kubernetes.io/projected/01da6286-fbe8-4ae9-bb92-38c3e513c31e-kube-api-access-cwzn5\") pod \"coredns-7db6d8ff4d-ksmx4\" (UID: \"01da6286-fbe8-4ae9-bb92-38c3e513c31e\") " pod="kube-system/coredns-7db6d8ff4d-ksmx4" May 14 00:09:56.563302 kubelet[3148]: I0514 00:09:56.563151 3148 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/aa7b2127-b534-455b-a1a4-11ce73c80227-config-volume\") pod \"coredns-7db6d8ff4d-5gfdt\" (UID: \"aa7b2127-b534-455b-a1a4-11ce73c80227\") " pod="kube-system/coredns-7db6d8ff4d-5gfdt" May 14 00:09:56.563302 kubelet[3148]: I0514 00:09:56.563171 3148 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbxbs\" (UniqueName: \"kubernetes.io/projected/aa7b2127-b534-455b-a1a4-11ce73c80227-kube-api-access-sbxbs\") pod \"coredns-7db6d8ff4d-5gfdt\" (UID: \"aa7b2127-b534-455b-a1a4-11ce73c80227\") " pod="kube-system/coredns-7db6d8ff4d-5gfdt" May 14 00:09:56.563302 kubelet[3148]: I0514 00:09:56.563187 3148 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/01da6286-fbe8-4ae9-bb92-38c3e513c31e-config-volume\") pod \"coredns-7db6d8ff4d-ksmx4\" (UID: \"01da6286-fbe8-4ae9-bb92-38c3e513c31e\") " pod="kube-system/coredns-7db6d8ff4d-ksmx4" May 14 00:09:56.811760 containerd[1513]: time="2025-05-14T00:09:56.811593201Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-5gfdt,Uid:aa7b2127-b534-455b-a1a4-11ce73c80227,Namespace:kube-system,Attempt:0,}" May 14 00:09:56.822978 containerd[1513]: time="2025-05-14T00:09:56.822404590Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-ksmx4,Uid:01da6286-fbe8-4ae9-bb92-38c3e513c31e,Namespace:kube-system,Attempt:0,}" May 14 00:09:58.379261 systemd-networkd[1405]: cilium_host: Link UP May 14 00:09:58.379409 systemd-networkd[1405]: cilium_net: Link UP May 14 00:09:58.379582 systemd-networkd[1405]: cilium_net: Gained carrier May 14 00:09:58.379728 systemd-networkd[1405]: cilium_host: Gained carrier May 14 00:09:58.516098 systemd-networkd[1405]: cilium_vxlan: Link UP May 14 00:09:58.516108 systemd-networkd[1405]: cilium_vxlan: Gained carrier May 14 00:09:58.680758 systemd-networkd[1405]: cilium_host: Gained IPv6LL May 14 00:09:58.951460 kernel: NET: Registered PF_ALG protocol family May 14 00:09:59.305263 systemd-networkd[1405]: cilium_net: Gained IPv6LL May 14 00:09:59.788915 systemd-networkd[1405]: lxc_health: Link UP May 14 00:09:59.789232 systemd-networkd[1405]: lxc_health: Gained carrier May 14 00:09:59.856011 kubelet[3148]: I0514 00:09:59.855904 3148 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-x5m89" podStartSLOduration=10.116816732 podStartE2EDuration="17.8558852s" podCreationTimestamp="2025-05-14 00:09:42 +0000 UTC" firstStartedPulling="2025-05-14 00:09:43.956919859 +0000 UTC m=+16.137233030" lastFinishedPulling="2025-05-14 00:09:51.695988326 +0000 UTC m=+23.876301498" observedRunningTime="2025-05-14 00:09:57.17457528 +0000 UTC m=+29.354888491" watchObservedRunningTime="2025-05-14 00:09:59.8558852 +0000 UTC m=+32.036198372" May 14 00:10:00.382801 systemd-networkd[1405]: lxcafac0f1cb256: Link UP 
May 14 00:10:00.384375 kernel: eth0: renamed from tmp3df0a May 14 00:10:00.394646 systemd-networkd[1405]: lxcafac0f1cb256: Gained carrier May 14 00:10:00.403670 systemd-networkd[1405]: lxcab202a812135: Link UP May 14 00:10:00.409134 kernel: eth0: renamed from tmp456e5 May 14 00:10:00.417204 systemd-networkd[1405]: lxcab202a812135: Gained carrier May 14 00:10:00.456226 systemd-networkd[1405]: cilium_vxlan: Gained IPv6LL May 14 00:10:00.840178 systemd-networkd[1405]: lxc_health: Gained IPv6LL May 14 00:10:01.482885 systemd-networkd[1405]: lxcafac0f1cb256: Gained IPv6LL May 14 00:10:02.248365 systemd-networkd[1405]: lxcab202a812135: Gained IPv6LL May 14 00:10:03.694317 containerd[1513]: time="2025-05-14T00:10:03.694252667Z" level=info msg="connecting to shim 3df0ad6d3630939720689d4a9527afc14a48e295b36cc1138f9c1c7b9feddc26" address="unix:///run/containerd/s/45be2e163fab7098e59c8d6fb47f164dfdf3c61243da0162b806535d80c4a326" namespace=k8s.io protocol=ttrpc version=3 May 14 00:10:03.724142 systemd[1]: Started cri-containerd-3df0ad6d3630939720689d4a9527afc14a48e295b36cc1138f9c1c7b9feddc26.scope - libcontainer container 3df0ad6d3630939720689d4a9527afc14a48e295b36cc1138f9c1c7b9feddc26. May 14 00:10:03.770005 containerd[1513]: time="2025-05-14T00:10:03.769915863Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-5gfdt,Uid:aa7b2127-b534-455b-a1a4-11ce73c80227,Namespace:kube-system,Attempt:0,} returns sandbox id \"3df0ad6d3630939720689d4a9527afc14a48e295b36cc1138f9c1c7b9feddc26\"" May 14 00:10:03.775303 containerd[1513]: time="2025-05-14T00:10:03.775258200Z" level=info msg="CreateContainer within sandbox \"3df0ad6d3630939720689d4a9527afc14a48e295b36cc1138f9c1c7b9feddc26\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 14 00:10:03.790596 containerd[1513]: time="2025-05-14T00:10:03.790551717Z" level=info msg="connecting to shim 456e583e01a8c74a9e39db1f13d9a73414011a04fa4b121c53e793d87d2fe7f3" address="unix:///run/containerd/s/0d00594039301bb4df2e4d593c740f0bc5cf5b9b2b31a3e9793c070ca008151d" namespace=k8s.io protocol=ttrpc version=3 May 14 00:10:03.817677 containerd[1513]: time="2025-05-14T00:10:03.816970621Z" level=info msg="Container 78a1cf7fd7ffd2b0367d0537b9e3cdf6316d08797603428730186eabdf6b0448: CDI devices from CRI Config.CDIDevices: []" May 14 00:10:03.830042 systemd[1]: Started cri-containerd-456e583e01a8c74a9e39db1f13d9a73414011a04fa4b121c53e793d87d2fe7f3.scope - libcontainer container 456e583e01a8c74a9e39db1f13d9a73414011a04fa4b121c53e793d87d2fe7f3. 
May 14 00:10:03.832519 containerd[1513]: time="2025-05-14T00:10:03.832489160Z" level=info msg="CreateContainer within sandbox \"3df0ad6d3630939720689d4a9527afc14a48e295b36cc1138f9c1c7b9feddc26\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"78a1cf7fd7ffd2b0367d0537b9e3cdf6316d08797603428730186eabdf6b0448\"" May 14 00:10:03.834056 containerd[1513]: time="2025-05-14T00:10:03.833315144Z" level=info msg="StartContainer for \"78a1cf7fd7ffd2b0367d0537b9e3cdf6316d08797603428730186eabdf6b0448\"" May 14 00:10:03.834056 containerd[1513]: time="2025-05-14T00:10:03.833948508Z" level=info msg="connecting to shim 78a1cf7fd7ffd2b0367d0537b9e3cdf6316d08797603428730186eabdf6b0448" address="unix:///run/containerd/s/45be2e163fab7098e59c8d6fb47f164dfdf3c61243da0162b806535d80c4a326" protocol=ttrpc version=3 May 14 00:10:03.866176 systemd[1]: Started cri-containerd-78a1cf7fd7ffd2b0367d0537b9e3cdf6316d08797603428730186eabdf6b0448.scope - libcontainer container 78a1cf7fd7ffd2b0367d0537b9e3cdf6316d08797603428730186eabdf6b0448. May 14 00:10:03.925689 containerd[1513]: time="2025-05-14T00:10:03.925657738Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-ksmx4,Uid:01da6286-fbe8-4ae9-bb92-38c3e513c31e,Namespace:kube-system,Attempt:0,} returns sandbox id \"456e583e01a8c74a9e39db1f13d9a73414011a04fa4b121c53e793d87d2fe7f3\"" May 14 00:10:03.929455 containerd[1513]: time="2025-05-14T00:10:03.929439888Z" level=info msg="CreateContainer within sandbox \"456e583e01a8c74a9e39db1f13d9a73414011a04fa4b121c53e793d87d2fe7f3\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 14 00:10:03.937952 containerd[1513]: time="2025-05-14T00:10:03.937934286Z" level=info msg="StartContainer for \"78a1cf7fd7ffd2b0367d0537b9e3cdf6316d08797603428730186eabdf6b0448\" returns successfully" May 14 00:10:03.939434 containerd[1513]: time="2025-05-14T00:10:03.939419271Z" level=info msg="Container d39ecb6c48d18077fe74a6ec95bdda5a28fbbe3d97ebb28f8e4376cc54293576: CDI devices from CRI Config.CDIDevices: []" May 14 00:10:03.945828 containerd[1513]: time="2025-05-14T00:10:03.945701234Z" level=info msg="CreateContainer within sandbox \"456e583e01a8c74a9e39db1f13d9a73414011a04fa4b121c53e793d87d2fe7f3\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d39ecb6c48d18077fe74a6ec95bdda5a28fbbe3d97ebb28f8e4376cc54293576\"" May 14 00:10:03.946830 containerd[1513]: time="2025-05-14T00:10:03.946816629Z" level=info msg="StartContainer for \"d39ecb6c48d18077fe74a6ec95bdda5a28fbbe3d97ebb28f8e4376cc54293576\"" May 14 00:10:03.947857 containerd[1513]: time="2025-05-14T00:10:03.947771944Z" level=info msg="connecting to shim d39ecb6c48d18077fe74a6ec95bdda5a28fbbe3d97ebb28f8e4376cc54293576" address="unix:///run/containerd/s/0d00594039301bb4df2e4d593c740f0bc5cf5b9b2b31a3e9793c070ca008151d" protocol=ttrpc version=3 May 14 00:10:03.964144 systemd[1]: Started cri-containerd-d39ecb6c48d18077fe74a6ec95bdda5a28fbbe3d97ebb28f8e4376cc54293576.scope - libcontainer container d39ecb6c48d18077fe74a6ec95bdda5a28fbbe3d97ebb28f8e4376cc54293576. 
May 14 00:10:03.999923 containerd[1513]: time="2025-05-14T00:10:03.999826568Z" level=info msg="StartContainer for \"d39ecb6c48d18077fe74a6ec95bdda5a28fbbe3d97ebb28f8e4376cc54293576\" returns successfully" May 14 00:10:04.197138 kubelet[3148]: I0514 00:10:04.196832 3148 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-ksmx4" podStartSLOduration=21.196770411 podStartE2EDuration="21.196770411s" podCreationTimestamp="2025-05-14 00:09:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 00:10:04.193831106 +0000 UTC m=+36.374144296" watchObservedRunningTime="2025-05-14 00:10:04.196770411 +0000 UTC m=+36.377083592" May 14 00:10:04.216101 kubelet[3148]: I0514 00:10:04.215734 3148 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-5gfdt" podStartSLOduration=21.215705568 podStartE2EDuration="21.215705568s" podCreationTimestamp="2025-05-14 00:09:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 00:10:04.211785808 +0000 UTC m=+36.392099000" watchObservedRunningTime="2025-05-14 00:10:04.215705568 +0000 UTC m=+36.396018758" May 14 00:10:04.653133 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1362626578.mount: Deactivated successfully. May 14 00:12:50.250510 update_engine[1499]: I20250514 00:12:50.250393 1499 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs May 14 00:12:50.250510 update_engine[1499]: I20250514 00:12:50.250485 1499 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs May 14 00:12:50.253949 update_engine[1499]: I20250514 00:12:50.253895 1499 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs May 14 00:12:50.254807 update_engine[1499]: I20250514 00:12:50.254751 1499 omaha_request_params.cc:62] Current group set to alpha May 14 00:12:50.255347 update_engine[1499]: I20250514 00:12:50.254930 1499 update_attempter.cc:499] Already updated boot flags. Skipping. May 14 00:12:50.255347 update_engine[1499]: I20250514 00:12:50.254942 1499 update_attempter.cc:643] Scheduling an action processor start. 
May 14 00:12:50.255347 update_engine[1499]: I20250514 00:12:50.254967 1499 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction May 14 00:12:50.255347 update_engine[1499]: I20250514 00:12:50.255015 1499 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs May 14 00:12:50.255347 update_engine[1499]: I20250514 00:12:50.255124 1499 omaha_request_action.cc:271] Posting an Omaha request to disabled May 14 00:12:50.255347 update_engine[1499]: I20250514 00:12:50.255137 1499 omaha_request_action.cc:272] Request: May 14 00:12:50.255347 update_engine[1499]: May 14 00:12:50.255347 update_engine[1499]: May 14 00:12:50.255347 update_engine[1499]: May 14 00:12:50.255347 update_engine[1499]: May 14 00:12:50.255347 update_engine[1499]: May 14 00:12:50.255347 update_engine[1499]: May 14 00:12:50.255347 update_engine[1499]: May 14 00:12:50.255347 update_engine[1499]: May 14 00:12:50.255347 update_engine[1499]: I20250514 00:12:50.255147 1499 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 14 00:12:50.281109 update_engine[1499]: I20250514 00:12:50.280441 1499 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 14 00:12:50.281109 update_engine[1499]: I20250514 00:12:50.281083 1499 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. May 14 00:12:50.282608 locksmithd[1528]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 May 14 00:12:50.283356 update_engine[1499]: E20250514 00:12:50.283278 1499 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 14 00:12:50.283436 update_engine[1499]: I20250514 00:12:50.283396 1499 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 May 14 00:13:00.130859 update_engine[1499]: I20250514 00:13:00.130743 1499 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 14 00:13:00.131372 update_engine[1499]: I20250514 00:13:00.131099 1499 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 14 00:13:00.131416 update_engine[1499]: I20250514 00:13:00.131398 1499 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. May 14 00:13:00.132843 update_engine[1499]: E20250514 00:13:00.132786 1499 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 14 00:13:00.132998 update_engine[1499]: I20250514 00:13:00.132882 1499 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 May 14 00:13:02.704350 systemd[1]: Started sshd@8-95.217.191.100:22-64.62.156.181:61005.service - OpenSSH per-connection server daemon (64.62.156.181:61005). May 14 00:13:03.399922 sshd[4465]: Invalid user from 64.62.156.181 port 61005 May 14 00:13:06.685432 sshd[4465]: Connection closed by invalid user 64.62.156.181 port 61005 [preauth] May 14 00:13:06.688084 systemd[1]: sshd@8-95.217.191.100:22-64.62.156.181:61005.service: Deactivated successfully. May 14 00:13:10.130782 update_engine[1499]: I20250514 00:13:10.130697 1499 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 14 00:13:10.131190 update_engine[1499]: I20250514 00:13:10.130963 1499 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 14 00:13:10.131269 update_engine[1499]: I20250514 00:13:10.131237 1499 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
May 14 00:13:10.132162 update_engine[1499]: E20250514 00:13:10.132133 1499 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 14 00:13:10.132203 update_engine[1499]: I20250514 00:13:10.132184 1499 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 May 14 00:13:20.124112 update_engine[1499]: I20250514 00:13:20.123582 1499 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 14 00:13:20.126403 update_engine[1499]: I20250514 00:13:20.124760 1499 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 14 00:13:20.126403 update_engine[1499]: I20250514 00:13:20.126152 1499 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. May 14 00:13:20.127461 update_engine[1499]: E20250514 00:13:20.127321 1499 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 14 00:13:20.127461 update_engine[1499]: I20250514 00:13:20.127394 1499 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded May 14 00:13:20.127461 update_engine[1499]: I20250514 00:13:20.127404 1499 omaha_request_action.cc:617] Omaha request response: May 14 00:13:20.127673 update_engine[1499]: E20250514 00:13:20.127486 1499 omaha_request_action.cc:636] Omaha request network transfer failed. May 14 00:13:20.127673 update_engine[1499]: I20250514 00:13:20.127507 1499 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. May 14 00:13:20.127673 update_engine[1499]: I20250514 00:13:20.127513 1499 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction May 14 00:13:20.127673 update_engine[1499]: I20250514 00:13:20.127518 1499 update_attempter.cc:306] Processing Done. May 14 00:13:20.127673 update_engine[1499]: E20250514 00:13:20.127533 1499 update_attempter.cc:619] Update failed. May 14 00:13:20.127673 update_engine[1499]: I20250514 00:13:20.127540 1499 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse May 14 00:13:20.127673 update_engine[1499]: I20250514 00:13:20.127545 1499 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) May 14 00:13:20.127673 update_engine[1499]: I20250514 00:13:20.127552 1499 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. May 14 00:13:20.127673 update_engine[1499]: I20250514 00:13:20.127622 1499 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction May 14 00:13:20.127673 update_engine[1499]: I20250514 00:13:20.127645 1499 omaha_request_action.cc:271] Posting an Omaha request to disabled May 14 00:13:20.127673 update_engine[1499]: I20250514 00:13:20.127650 1499 omaha_request_action.cc:272] Request: May 14 00:13:20.127673 update_engine[1499]: May 14 00:13:20.127673 update_engine[1499]: May 14 00:13:20.127673 update_engine[1499]: May 14 00:13:20.127673 update_engine[1499]: May 14 00:13:20.127673 update_engine[1499]: May 14 00:13:20.127673 update_engine[1499]: May 14 00:13:20.127673 update_engine[1499]: I20250514 00:13:20.127657 1499 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 14 00:13:20.128151 update_engine[1499]: I20250514 00:13:20.127808 1499 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 14 00:13:20.128151 update_engine[1499]: I20250514 00:13:20.127974 1499 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
May 14 00:13:20.128708 locksmithd[1528]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 May 14 00:13:20.129597 update_engine[1499]: E20250514 00:13:20.128559 1499 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 14 00:13:20.129597 update_engine[1499]: I20250514 00:13:20.128600 1499 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded May 14 00:13:20.129597 update_engine[1499]: I20250514 00:13:20.128608 1499 omaha_request_action.cc:617] Omaha request response: May 14 00:13:20.129597 update_engine[1499]: I20250514 00:13:20.128614 1499 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction May 14 00:13:20.129597 update_engine[1499]: I20250514 00:13:20.128620 1499 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction May 14 00:13:20.129597 update_engine[1499]: I20250514 00:13:20.128626 1499 update_attempter.cc:306] Processing Done. May 14 00:13:20.129597 update_engine[1499]: I20250514 00:13:20.128633 1499 update_attempter.cc:310] Error event sent. May 14 00:13:20.129597 update_engine[1499]: I20250514 00:13:20.128640 1499 update_check_scheduler.cc:74] Next update check in 42m12s May 14 00:13:20.129809 locksmithd[1528]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 May 14 00:14:04.710629 systemd[1]: Started sshd@9-95.217.191.100:22-139.178.89.65:56748.service - OpenSSH per-connection server daemon (139.178.89.65:56748). May 14 00:14:05.699161 sshd[4478]: Accepted publickey for core from 139.178.89.65 port 56748 ssh2: RSA SHA256:hl+BNyx6+jF0K6RugsdzigOvrBYkZECkTt1LqyyIOlQ May 14 00:14:05.702314 sshd-session[4478]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:14:05.712162 systemd-logind[1497]: New session 8 of user core. May 14 00:14:05.717350 systemd[1]: Started session-8.scope - Session 8 of User core. May 14 00:14:07.040981 sshd[4480]: Connection closed by 139.178.89.65 port 56748 May 14 00:14:07.041801 sshd-session[4478]: pam_unix(sshd:session): session closed for user core May 14 00:14:07.049345 systemd-logind[1497]: Session 8 logged out. Waiting for processes to exit. May 14 00:14:07.050169 systemd[1]: sshd@9-95.217.191.100:22-139.178.89.65:56748.service: Deactivated successfully. May 14 00:14:07.052979 systemd[1]: session-8.scope: Deactivated successfully. May 14 00:14:07.054702 systemd-logind[1497]: Removed session 8. May 14 00:14:12.213860 systemd[1]: Started sshd@10-95.217.191.100:22-139.178.89.65:50476.service - OpenSSH per-connection server daemon (139.178.89.65:50476). May 14 00:14:13.229859 sshd[4496]: Accepted publickey for core from 139.178.89.65 port 50476 ssh2: RSA SHA256:hl+BNyx6+jF0K6RugsdzigOvrBYkZECkTt1LqyyIOlQ May 14 00:14:13.231477 sshd-session[4496]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:14:13.237473 systemd-logind[1497]: New session 9 of user core. May 14 00:14:13.245348 systemd[1]: Started session-9.scope - Session 9 of User core. May 14 00:14:14.047557 sshd[4498]: Connection closed by 139.178.89.65 port 50476 May 14 00:14:14.049058 sshd-session[4496]: pam_unix(sshd:session): session closed for user core May 14 00:14:14.058594 systemd-logind[1497]: Session 9 logged out. Waiting for processes to exit. 
May 14 00:14:14.059862 systemd[1]: sshd@10-95.217.191.100:22-139.178.89.65:50476.service: Deactivated successfully. May 14 00:14:14.063465 systemd[1]: session-9.scope: Deactivated successfully. May 14 00:14:14.065576 systemd-logind[1497]: Removed session 9. May 14 00:14:19.223116 systemd[1]: Started sshd@11-95.217.191.100:22-139.178.89.65:50768.service - OpenSSH per-connection server daemon (139.178.89.65:50768). May 14 00:14:20.252817 sshd[4513]: Accepted publickey for core from 139.178.89.65 port 50768 ssh2: RSA SHA256:hl+BNyx6+jF0K6RugsdzigOvrBYkZECkTt1LqyyIOlQ May 14 00:14:20.254594 sshd-session[4513]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:14:20.264140 systemd-logind[1497]: New session 10 of user core. May 14 00:14:20.268199 systemd[1]: Started session-10.scope - Session 10 of User core. May 14 00:14:21.061877 sshd[4515]: Connection closed by 139.178.89.65 port 50768 May 14 00:14:21.062843 sshd-session[4513]: pam_unix(sshd:session): session closed for user core May 14 00:14:21.068348 systemd[1]: sshd@11-95.217.191.100:22-139.178.89.65:50768.service: Deactivated successfully. May 14 00:14:21.071078 systemd[1]: session-10.scope: Deactivated successfully. May 14 00:14:21.072540 systemd-logind[1497]: Session 10 logged out. Waiting for processes to exit. May 14 00:14:21.074104 systemd-logind[1497]: Removed session 10. May 14 00:14:21.241139 systemd[1]: Started sshd@12-95.217.191.100:22-139.178.89.65:50784.service - OpenSSH per-connection server daemon (139.178.89.65:50784). May 14 00:14:22.238010 sshd[4528]: Accepted publickey for core from 139.178.89.65 port 50784 ssh2: RSA SHA256:hl+BNyx6+jF0K6RugsdzigOvrBYkZECkTt1LqyyIOlQ May 14 00:14:22.240451 sshd-session[4528]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:14:22.252906 systemd-logind[1497]: New session 11 of user core. May 14 00:14:22.261936 systemd[1]: Started session-11.scope - Session 11 of User core. 
May 14 00:14:22.545327 containerd[1513]: time="2025-05-14T00:14:22.543719249Z" level=warning msg="container event discarded" container=f9a0e9ba6a3f2d476af4922b8c8bf9404339b2c43df6cc4432a4e8f9850bb8ad type=CONTAINER_CREATED_EVENT May 14 00:14:22.557553 containerd[1513]: time="2025-05-14T00:14:22.557456734Z" level=warning msg="container event discarded" container=f9a0e9ba6a3f2d476af4922b8c8bf9404339b2c43df6cc4432a4e8f9850bb8ad type=CONTAINER_STARTED_EVENT May 14 00:14:22.582991 containerd[1513]: time="2025-05-14T00:14:22.582851202Z" level=warning msg="container event discarded" container=2229ce920f9ccf1a3c1c58a4eda16de4dcc6318b786757697e238d423e54a289 type=CONTAINER_CREATED_EVENT May 14 00:14:22.582991 containerd[1513]: time="2025-05-14T00:14:22.582977577Z" level=warning msg="container event discarded" container=2229ce920f9ccf1a3c1c58a4eda16de4dcc6318b786757697e238d423e54a289 type=CONTAINER_STARTED_EVENT May 14 00:14:22.597451 containerd[1513]: time="2025-05-14T00:14:22.597332609Z" level=warning msg="container event discarded" container=ad9800ca077204fb24b5ed3c33039e327bc41c575ec0a36d27830fce576bd2a4 type=CONTAINER_CREATED_EVENT May 14 00:14:22.597451 containerd[1513]: time="2025-05-14T00:14:22.597431012Z" level=warning msg="container event discarded" container=ad9800ca077204fb24b5ed3c33039e327bc41c575ec0a36d27830fce576bd2a4 type=CONTAINER_STARTED_EVENT May 14 00:14:22.597451 containerd[1513]: time="2025-05-14T00:14:22.597452592Z" level=warning msg="container event discarded" container=1e597936f54823210e782f2bfc97eb352d930c6342ac18fafd5651c79006876e type=CONTAINER_CREATED_EVENT May 14 00:14:22.597451 containerd[1513]: time="2025-05-14T00:14:22.597473762Z" level=warning msg="container event discarded" container=6c0a3053192afb1985fbf29b6abb0e2d848f7309453b8106c215564dac6816a7 type=CONTAINER_CREATED_EVENT May 14 00:14:22.637966 containerd[1513]: time="2025-05-14T00:14:22.637837332Z" level=warning msg="container event discarded" container=bbb26237b7adc9d210c1a77fc1ca3d3df36e3e3e9073b44a2382acbd024c47f2 type=CONTAINER_CREATED_EVENT May 14 00:14:22.693291 containerd[1513]: time="2025-05-14T00:14:22.693180445Z" level=warning msg="container event discarded" container=6c0a3053192afb1985fbf29b6abb0e2d848f7309453b8106c215564dac6816a7 type=CONTAINER_STARTED_EVENT May 14 00:14:22.739723 containerd[1513]: time="2025-05-14T00:14:22.739607203Z" level=warning msg="container event discarded" container=bbb26237b7adc9d210c1a77fc1ca3d3df36e3e3e9073b44a2382acbd024c47f2 type=CONTAINER_STARTED_EVENT May 14 00:14:22.763214 containerd[1513]: time="2025-05-14T00:14:22.763118335Z" level=warning msg="container event discarded" container=1e597936f54823210e782f2bfc97eb352d930c6342ac18fafd5651c79006876e type=CONTAINER_STARTED_EVENT May 14 00:14:23.069950 sshd[4530]: Connection closed by 139.178.89.65 port 50784 May 14 00:14:23.070791 sshd-session[4528]: pam_unix(sshd:session): session closed for user core May 14 00:14:23.074851 systemd[1]: sshd@12-95.217.191.100:22-139.178.89.65:50784.service: Deactivated successfully. May 14 00:14:23.077173 systemd[1]: session-11.scope: Deactivated successfully. May 14 00:14:23.078711 systemd-logind[1497]: Session 11 logged out. Waiting for processes to exit. May 14 00:14:23.080513 systemd-logind[1497]: Removed session 11. May 14 00:14:23.245941 systemd[1]: Started sshd@13-95.217.191.100:22-139.178.89.65:50786.service - OpenSSH per-connection server daemon (139.178.89.65:50786). 
May 14 00:14:24.254094 sshd[4541]: Accepted publickey for core from 139.178.89.65 port 50786 ssh2: RSA SHA256:hl+BNyx6+jF0K6RugsdzigOvrBYkZECkTt1LqyyIOlQ May 14 00:14:24.255847 sshd-session[4541]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:14:24.264174 systemd-logind[1497]: New session 12 of user core. May 14 00:14:24.272686 systemd[1]: Started session-12.scope - Session 12 of User core. May 14 00:14:25.028479 sshd[4543]: Connection closed by 139.178.89.65 port 50786 May 14 00:14:25.030286 sshd-session[4541]: pam_unix(sshd:session): session closed for user core May 14 00:14:25.034281 systemd[1]: sshd@13-95.217.191.100:22-139.178.89.65:50786.service: Deactivated successfully. May 14 00:14:25.037769 systemd[1]: session-12.scope: Deactivated successfully. May 14 00:14:25.040089 systemd-logind[1497]: Session 12 logged out. Waiting for processes to exit. May 14 00:14:25.042402 systemd-logind[1497]: Removed session 12. May 14 00:14:30.201950 systemd[1]: Started sshd@14-95.217.191.100:22-139.178.89.65:51580.service - OpenSSH per-connection server daemon (139.178.89.65:51580). May 14 00:14:31.215806 sshd[4557]: Accepted publickey for core from 139.178.89.65 port 51580 ssh2: RSA SHA256:hl+BNyx6+jF0K6RugsdzigOvrBYkZECkTt1LqyyIOlQ May 14 00:14:31.217820 sshd-session[4557]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:14:31.227046 systemd-logind[1497]: New session 13 of user core. May 14 00:14:31.229342 systemd[1]: Started session-13.scope - Session 13 of User core. May 14 00:14:31.991350 sshd[4559]: Connection closed by 139.178.89.65 port 51580 May 14 00:14:31.993314 sshd-session[4557]: pam_unix(sshd:session): session closed for user core May 14 00:14:31.997681 systemd[1]: sshd@14-95.217.191.100:22-139.178.89.65:51580.service: Deactivated successfully. May 14 00:14:32.000875 systemd[1]: session-13.scope: Deactivated successfully. May 14 00:14:32.002299 systemd-logind[1497]: Session 13 logged out. Waiting for processes to exit. May 14 00:14:32.003621 systemd-logind[1497]: Removed session 13. May 14 00:14:32.164511 systemd[1]: Started sshd@15-95.217.191.100:22-139.178.89.65:51584.service - OpenSSH per-connection server daemon (139.178.89.65:51584). May 14 00:14:33.169243 sshd[4571]: Accepted publickey for core from 139.178.89.65 port 51584 ssh2: RSA SHA256:hl+BNyx6+jF0K6RugsdzigOvrBYkZECkTt1LqyyIOlQ May 14 00:14:33.171203 sshd-session[4571]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:14:33.177891 systemd-logind[1497]: New session 14 of user core. May 14 00:14:33.186380 systemd[1]: Started session-14.scope - Session 14 of User core. May 14 00:14:34.152527 sshd[4573]: Connection closed by 139.178.89.65 port 51584 May 14 00:14:34.154114 sshd-session[4571]: pam_unix(sshd:session): session closed for user core May 14 00:14:34.162535 systemd[1]: sshd@15-95.217.191.100:22-139.178.89.65:51584.service: Deactivated successfully. May 14 00:14:34.165227 systemd[1]: session-14.scope: Deactivated successfully. May 14 00:14:34.167769 systemd-logind[1497]: Session 14 logged out. Waiting for processes to exit. May 14 00:14:34.169743 systemd-logind[1497]: Removed session 14. May 14 00:14:34.322142 systemd[1]: Started sshd@16-95.217.191.100:22-139.178.89.65:51594.service - OpenSSH per-connection server daemon (139.178.89.65:51594). 
May 14 00:14:35.346737 sshd[4582]: Accepted publickey for core from 139.178.89.65 port 51594 ssh2: RSA SHA256:hl+BNyx6+jF0K6RugsdzigOvrBYkZECkTt1LqyyIOlQ May 14 00:14:35.349018 sshd-session[4582]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:14:35.356880 systemd-logind[1497]: New session 15 of user core. May 14 00:14:35.367359 systemd[1]: Started session-15.scope - Session 15 of User core. May 14 00:14:38.132564 sshd[4584]: Connection closed by 139.178.89.65 port 51594 May 14 00:14:38.136783 sshd-session[4582]: pam_unix(sshd:session): session closed for user core May 14 00:14:38.145687 systemd[1]: sshd@16-95.217.191.100:22-139.178.89.65:51594.service: Deactivated successfully. May 14 00:14:38.149637 systemd[1]: session-15.scope: Deactivated successfully. May 14 00:14:38.152799 systemd-logind[1497]: Session 15 logged out. Waiting for processes to exit. May 14 00:14:38.155176 systemd-logind[1497]: Removed session 15. May 14 00:14:38.306728 systemd[1]: Started sshd@17-95.217.191.100:22-139.178.89.65:49106.service - OpenSSH per-connection server daemon (139.178.89.65:49106). May 14 00:14:39.306800 sshd[4602]: Accepted publickey for core from 139.178.89.65 port 49106 ssh2: RSA SHA256:hl+BNyx6+jF0K6RugsdzigOvrBYkZECkTt1LqyyIOlQ May 14 00:14:39.308436 sshd-session[4602]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:14:39.313464 systemd-logind[1497]: New session 16 of user core. May 14 00:14:39.317267 systemd[1]: Started session-16.scope - Session 16 of User core. May 14 00:14:40.544933 sshd[4604]: Connection closed by 139.178.89.65 port 49106 May 14 00:14:40.545872 sshd-session[4602]: pam_unix(sshd:session): session closed for user core May 14 00:14:40.550727 systemd[1]: sshd@17-95.217.191.100:22-139.178.89.65:49106.service: Deactivated successfully. May 14 00:14:40.553944 systemd[1]: session-16.scope: Deactivated successfully. May 14 00:14:40.556735 systemd-logind[1497]: Session 16 logged out. Waiting for processes to exit. May 14 00:14:40.559465 systemd-logind[1497]: Removed session 16. May 14 00:14:40.719600 systemd[1]: Started sshd@18-95.217.191.100:22-139.178.89.65:49120.service - OpenSSH per-connection server daemon (139.178.89.65:49120). May 14 00:14:41.731365 sshd[4615]: Accepted publickey for core from 139.178.89.65 port 49120 ssh2: RSA SHA256:hl+BNyx6+jF0K6RugsdzigOvrBYkZECkTt1LqyyIOlQ May 14 00:14:41.732047 sshd-session[4615]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:14:41.739801 systemd-logind[1497]: New session 17 of user core. May 14 00:14:41.749330 systemd[1]: Started session-17.scope - Session 17 of User core. May 14 00:14:42.568385 sshd[4617]: Connection closed by 139.178.89.65 port 49120 May 14 00:14:42.569172 sshd-session[4615]: pam_unix(sshd:session): session closed for user core May 14 00:14:42.573905 systemd[1]: sshd@18-95.217.191.100:22-139.178.89.65:49120.service: Deactivated successfully. May 14 00:14:42.577015 systemd[1]: session-17.scope: Deactivated successfully. May 14 00:14:42.578405 systemd-logind[1497]: Session 17 logged out. Waiting for processes to exit. May 14 00:14:42.579773 systemd-logind[1497]: Removed session 17. 
May 14 00:14:43.880926 containerd[1513]: time="2025-05-14T00:14:43.880757916Z" level=warning msg="container event discarded" container=e46a5d74fe2e14cb405179a60714ad36745ee06cd4f4cdcc00598a2259ffe7e8 type=CONTAINER_CREATED_EVENT May 14 00:14:43.880926 containerd[1513]: time="2025-05-14T00:14:43.880913196Z" level=warning msg="container event discarded" container=e46a5d74fe2e14cb405179a60714ad36745ee06cd4f4cdcc00598a2259ffe7e8 type=CONTAINER_STARTED_EVENT May 14 00:14:43.933315 containerd[1513]: time="2025-05-14T00:14:43.933183766Z" level=warning msg="container event discarded" container=3703925220b58aca4e1830bc2b9964060f54c6b2e5113924d3e983dbf2a7130f type=CONTAINER_CREATED_EVENT May 14 00:14:43.933315 containerd[1513]: time="2025-05-14T00:14:43.933294953Z" level=warning msg="container event discarded" container=3703925220b58aca4e1830bc2b9964060f54c6b2e5113924d3e983dbf2a7130f type=CONTAINER_STARTED_EVENT May 14 00:14:43.965937 containerd[1513]: time="2025-05-14T00:14:43.965818099Z" level=warning msg="container event discarded" container=69ffd32419e348c43191c8a88777b742ccb7c30a2f195bfdcece5dfedf87fd0c type=CONTAINER_CREATED_EVENT May 14 00:14:43.965937 containerd[1513]: time="2025-05-14T00:14:43.965894632Z" level=warning msg="container event discarded" container=69ffd32419e348c43191c8a88777b742ccb7c30a2f195bfdcece5dfedf87fd0c type=CONTAINER_STARTED_EVENT May 14 00:14:43.978173 containerd[1513]: time="2025-05-14T00:14:43.978084593Z" level=warning msg="container event discarded" container=43666838b9574e655a3b6503bfaee289353e258ffcc2a35b3f3352720b808d50 type=CONTAINER_CREATED_EVENT May 14 00:14:44.049332 containerd[1513]: time="2025-05-14T00:14:44.049238730Z" level=warning msg="container event discarded" container=43666838b9574e655a3b6503bfaee289353e258ffcc2a35b3f3352720b808d50 type=CONTAINER_STARTED_EVENT May 14 00:14:45.948329 containerd[1513]: time="2025-05-14T00:14:45.948222100Z" level=warning msg="container event discarded" container=72b5d6046e360e210a5c90addb7ac14d120e7d785e0fa744a93ec08f721bd93b type=CONTAINER_CREATED_EVENT May 14 00:14:46.019477 containerd[1513]: time="2025-05-14T00:14:46.019373433Z" level=warning msg="container event discarded" container=72b5d6046e360e210a5c90addb7ac14d120e7d785e0fa744a93ec08f721bd93b type=CONTAINER_STARTED_EVENT May 14 00:14:47.742343 systemd[1]: Started sshd@19-95.217.191.100:22-139.178.89.65:57320.service - OpenSSH per-connection server daemon (139.178.89.65:57320). May 14 00:14:48.746928 sshd[4635]: Accepted publickey for core from 139.178.89.65 port 57320 ssh2: RSA SHA256:hl+BNyx6+jF0K6RugsdzigOvrBYkZECkTt1LqyyIOlQ May 14 00:14:48.749300 sshd-session[4635]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:14:48.757489 systemd-logind[1497]: New session 18 of user core. May 14 00:14:48.766406 systemd[1]: Started session-18.scope - Session 18 of User core. May 14 00:14:49.527167 sshd[4639]: Connection closed by 139.178.89.65 port 57320 May 14 00:14:49.527934 sshd-session[4635]: pam_unix(sshd:session): session closed for user core May 14 00:14:49.532772 systemd[1]: sshd@19-95.217.191.100:22-139.178.89.65:57320.service: Deactivated successfully. May 14 00:14:49.535610 systemd[1]: session-18.scope: Deactivated successfully. May 14 00:14:49.536664 systemd-logind[1497]: Session 18 logged out. Waiting for processes to exit. May 14 00:14:49.538206 systemd-logind[1497]: Removed session 18. 
May 14 00:14:51.766140 containerd[1513]: time="2025-05-14T00:14:51.766020019Z" level=warning msg="container event discarded" container=04c07a9c4290f8945063aba7c48a24a66f4f3254903e2c79af5faa8df2068d59 type=CONTAINER_CREATED_EVENT May 14 00:14:51.993512 containerd[1513]: time="2025-05-14T00:14:51.993305917Z" level=warning msg="container event discarded" container=04c07a9c4290f8945063aba7c48a24a66f4f3254903e2c79af5faa8df2068d59 type=CONTAINER_STARTED_EVENT May 14 00:14:52.282590 containerd[1513]: time="2025-05-14T00:14:52.282417879Z" level=warning msg="container event discarded" container=04c07a9c4290f8945063aba7c48a24a66f4f3254903e2c79af5faa8df2068d59 type=CONTAINER_STOPPED_EVENT May 14 00:14:53.212536 containerd[1513]: time="2025-05-14T00:14:53.212427648Z" level=warning msg="container event discarded" container=dc0ac5479f4243b87bbe303d5b624869f5e69937fb8fd146de8527e0166a300b type=CONTAINER_CREATED_EVENT May 14 00:14:53.304756 containerd[1513]: time="2025-05-14T00:14:53.304678371Z" level=warning msg="container event discarded" container=dc0ac5479f4243b87bbe303d5b624869f5e69937fb8fd146de8527e0166a300b type=CONTAINER_STARTED_EVENT May 14 00:14:53.396299 containerd[1513]: time="2025-05-14T00:14:53.396209261Z" level=warning msg="container event discarded" container=dc0ac5479f4243b87bbe303d5b624869f5e69937fb8fd146de8527e0166a300b type=CONTAINER_STOPPED_EVENT May 14 00:14:54.184961 containerd[1513]: time="2025-05-14T00:14:54.184841779Z" level=warning msg="container event discarded" container=daef33a50e6eae410afea0f636ef9fea4d34a0e3a7bb8278b7a2e203985731c4 type=CONTAINER_CREATED_EVENT May 14 00:14:54.289308 containerd[1513]: time="2025-05-14T00:14:54.289199281Z" level=warning msg="container event discarded" container=daef33a50e6eae410afea0f636ef9fea4d34a0e3a7bb8278b7a2e203985731c4 type=CONTAINER_STARTED_EVENT May 14 00:14:54.343631 containerd[1513]: time="2025-05-14T00:14:54.343540641Z" level=warning msg="container event discarded" container=daef33a50e6eae410afea0f636ef9fea4d34a0e3a7bb8278b7a2e203985731c4 type=CONTAINER_STOPPED_EVENT May 14 00:14:54.704105 systemd[1]: Started sshd@20-95.217.191.100:22-139.178.89.65:57330.service - OpenSSH per-connection server daemon (139.178.89.65:57330). May 14 00:14:55.170204 containerd[1513]: time="2025-05-14T00:14:55.170105162Z" level=warning msg="container event discarded" container=b53fb347f4a87fb63b92467381bb44228eb38268892fca18e22c325cc4f2e759 type=CONTAINER_CREATED_EVENT May 14 00:14:55.256888 containerd[1513]: time="2025-05-14T00:14:55.256781239Z" level=warning msg="container event discarded" container=b53fb347f4a87fb63b92467381bb44228eb38268892fca18e22c325cc4f2e759 type=CONTAINER_STARTED_EVENT May 14 00:14:55.290493 containerd[1513]: time="2025-05-14T00:14:55.290379518Z" level=warning msg="container event discarded" container=b53fb347f4a87fb63b92467381bb44228eb38268892fca18e22c325cc4f2e759 type=CONTAINER_STOPPED_EVENT May 14 00:14:55.711382 sshd[4651]: Accepted publickey for core from 139.178.89.65 port 57330 ssh2: RSA SHA256:hl+BNyx6+jF0K6RugsdzigOvrBYkZECkTt1LqyyIOlQ May 14 00:14:55.712338 sshd-session[4651]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:14:55.722145 systemd-logind[1497]: New session 19 of user core. May 14 00:14:55.727239 systemd[1]: Started session-19.scope - Session 19 of User core. 
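The recurring `container event discarded` warnings are containerd's CRI layer logging lifecycle events (created/started/stopped) that it discarded; the log itself does not say why. Independently of those warnings, the same lifecycle stream can be watched directly over containerd's event API. A minimal subscriber sketch follows, using the containerd 1.x Go client import paths (containerd 2.x moved the client to `github.com/containerd/containerd/v2/client`); the socket path and the `k8s.io` namespace are the usual defaults on a kubelet-managed node, but treat them as assumptions.

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	// Default containerd socket; adjust if your installation differs.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// The CRI plugin keeps Kubernetes containers in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// Subscribe to all events; each envelope carries a topic such as
	// /tasks/exit or /containers/create plus a typed payload.
	ch, errs := client.Subscribe(ctx)
	for {
		select {
		case env := <-ch:
			fmt.Printf("%s %s %s\n", env.Timestamp.Format("15:04:05"), env.Namespace, env.Topic)
		case err := <-errs:
			log.Fatal(err)
		}
	}
}
```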
May 14 00:14:56.215207 containerd[1513]: time="2025-05-14T00:14:56.215084700Z" level=warning msg="container event discarded" container=a117d129c8ab0120d676c8dc67ed4540960219ebe676d854adc8955355f541ee type=CONTAINER_CREATED_EVENT May 14 00:14:56.310554 containerd[1513]: time="2025-05-14T00:14:56.310454509Z" level=warning msg="container event discarded" container=a117d129c8ab0120d676c8dc67ed4540960219ebe676d854adc8955355f541ee type=CONTAINER_STARTED_EVENT May 14 00:14:56.512358 sshd[4654]: Connection closed by 139.178.89.65 port 57330 May 14 00:14:56.513679 sshd-session[4651]: pam_unix(sshd:session): session closed for user core May 14 00:14:56.519688 systemd[1]: sshd@20-95.217.191.100:22-139.178.89.65:57330.service: Deactivated successfully. May 14 00:14:56.523418 systemd[1]: session-19.scope: Deactivated successfully. May 14 00:14:56.525741 systemd-logind[1497]: Session 19 logged out. Waiting for processes to exit. May 14 00:14:56.527975 systemd-logind[1497]: Removed session 19. May 14 00:14:56.686361 systemd[1]: Started sshd@21-95.217.191.100:22-139.178.89.65:57404.service - OpenSSH per-connection server daemon (139.178.89.65:57404). May 14 00:14:57.687301 sshd[4666]: Accepted publickey for core from 139.178.89.65 port 57404 ssh2: RSA SHA256:hl+BNyx6+jF0K6RugsdzigOvrBYkZECkTt1LqyyIOlQ May 14 00:14:57.689653 sshd-session[4666]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:14:57.698415 systemd-logind[1497]: New session 20 of user core. May 14 00:14:57.705319 systemd[1]: Started session-20.scope - Session 20 of User core. May 14 00:14:59.606876 containerd[1513]: time="2025-05-14T00:14:59.606411928Z" level=info msg="StopContainer for \"72b5d6046e360e210a5c90addb7ac14d120e7d785e0fa744a93ec08f721bd93b\" with timeout 30 (s)" May 14 00:14:59.611777 containerd[1513]: time="2025-05-14T00:14:59.610561818Z" level=info msg="Stop container \"72b5d6046e360e210a5c90addb7ac14d120e7d785e0fa744a93ec08f721bd93b\" with signal terminated" May 14 00:14:59.638341 systemd[1]: cri-containerd-72b5d6046e360e210a5c90addb7ac14d120e7d785e0fa744a93ec08f721bd93b.scope: Deactivated successfully. May 14 00:14:59.641641 systemd[1]: cri-containerd-72b5d6046e360e210a5c90addb7ac14d120e7d785e0fa744a93ec08f721bd93b.scope: Consumed 735ms CPU time, 24.9M memory peak, 1.6M read from disk, 4K written to disk. May 14 00:14:59.644063 containerd[1513]: time="2025-05-14T00:14:59.643909474Z" level=info msg="received exit event container_id:\"72b5d6046e360e210a5c90addb7ac14d120e7d785e0fa744a93ec08f721bd93b\" id:\"72b5d6046e360e210a5c90addb7ac14d120e7d785e0fa744a93ec08f721bd93b\" pid:3544 exited_at:{seconds:1747181699 nanos:643094803}" May 14 00:14:59.646575 containerd[1513]: time="2025-05-14T00:14:59.646003656Z" level=info msg="TaskExit event in podsandbox handler container_id:\"72b5d6046e360e210a5c90addb7ac14d120e7d785e0fa744a93ec08f721bd93b\" id:\"72b5d6046e360e210a5c90addb7ac14d120e7d785e0fa744a93ec08f721bd93b\" pid:3544 exited_at:{seconds:1747181699 nanos:643094803}" May 14 00:14:59.665044 containerd[1513]: time="2025-05-14T00:14:59.664837507Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 14 00:14:59.671713 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-72b5d6046e360e210a5c90addb7ac14d120e7d785e0fa744a93ec08f721bd93b-rootfs.mount: Deactivated successfully. 
May 14 00:14:59.676969 containerd[1513]: time="2025-05-14T00:14:59.676900684Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a117d129c8ab0120d676c8dc67ed4540960219ebe676d854adc8955355f541ee\" id:\"87ebd139767ec05d7a9f7fcbb989d338b3627ce05bf11afa35a71f51ffc72e11\" pid:4694 exited_at:{seconds:1747181699 nanos:676494857}" May 14 00:14:59.679310 containerd[1513]: time="2025-05-14T00:14:59.679262505Z" level=info msg="StopContainer for \"a117d129c8ab0120d676c8dc67ed4540960219ebe676d854adc8955355f541ee\" with timeout 2 (s)" May 14 00:14:59.679658 containerd[1513]: time="2025-05-14T00:14:59.679620724Z" level=info msg="Stop container \"a117d129c8ab0120d676c8dc67ed4540960219ebe676d854adc8955355f541ee\" with signal terminated" May 14 00:14:59.689181 systemd-networkd[1405]: lxc_health: Link DOWN May 14 00:14:59.689192 systemd-networkd[1405]: lxc_health: Lost carrier May 14 00:14:59.711207 containerd[1513]: time="2025-05-14T00:14:59.710126513Z" level=info msg="StopContainer for \"72b5d6046e360e210a5c90addb7ac14d120e7d785e0fa744a93ec08f721bd93b\" returns successfully" May 14 00:14:59.712524 containerd[1513]: time="2025-05-14T00:14:59.712493424Z" level=info msg="StopPodSandbox for \"e46a5d74fe2e14cb405179a60714ad36745ee06cd4f4cdcc00598a2259ffe7e8\"" May 14 00:14:59.714870 containerd[1513]: time="2025-05-14T00:14:59.714805000Z" level=info msg="Container to stop \"72b5d6046e360e210a5c90addb7ac14d120e7d785e0fa744a93ec08f721bd93b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 14 00:14:59.716084 systemd[1]: cri-containerd-a117d129c8ab0120d676c8dc67ed4540960219ebe676d854adc8955355f541ee.scope: Deactivated successfully. May 14 00:14:59.716349 systemd[1]: cri-containerd-a117d129c8ab0120d676c8dc67ed4540960219ebe676d854adc8955355f541ee.scope: Consumed 8.275s CPU time, 159.1M memory peak, 32.4M read from disk, 13.3M written to disk. May 14 00:14:59.721017 containerd[1513]: time="2025-05-14T00:14:59.720520954Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a117d129c8ab0120d676c8dc67ed4540960219ebe676d854adc8955355f541ee\" id:\"a117d129c8ab0120d676c8dc67ed4540960219ebe676d854adc8955355f541ee\" pid:3773 exited_at:{seconds:1747181699 nanos:720218159}" May 14 00:14:59.721017 containerd[1513]: time="2025-05-14T00:14:59.720536984Z" level=info msg="received exit event container_id:\"a117d129c8ab0120d676c8dc67ed4540960219ebe676d854adc8955355f541ee\" id:\"a117d129c8ab0120d676c8dc67ed4540960219ebe676d854adc8955355f541ee\" pid:3773 exited_at:{seconds:1747181699 nanos:720218159}" May 14 00:14:59.728631 systemd[1]: cri-containerd-e46a5d74fe2e14cb405179a60714ad36745ee06cd4f4cdcc00598a2259ffe7e8.scope: Deactivated successfully. May 14 00:14:59.730817 containerd[1513]: time="2025-05-14T00:14:59.730613371Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e46a5d74fe2e14cb405179a60714ad36745ee06cd4f4cdcc00598a2259ffe7e8\" id:\"e46a5d74fe2e14cb405179a60714ad36745ee06cd4f4cdcc00598a2259ffe7e8\" pid:3257 exit_status:137 exited_at:{seconds:1747181699 nanos:729837763}" May 14 00:14:59.747345 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a117d129c8ab0120d676c8dc67ed4540960219ebe676d854adc8955355f541ee-rootfs.mount: Deactivated successfully. 
May 14 00:14:59.763094 containerd[1513]: time="2025-05-14T00:14:59.762819186Z" level=info msg="StopContainer for \"a117d129c8ab0120d676c8dc67ed4540960219ebe676d854adc8955355f541ee\" returns successfully" May 14 00:14:59.763688 containerd[1513]: time="2025-05-14T00:14:59.763564607Z" level=info msg="StopPodSandbox for \"69ffd32419e348c43191c8a88777b742ccb7c30a2f195bfdcece5dfedf87fd0c\"" May 14 00:14:59.763929 containerd[1513]: time="2025-05-14T00:14:59.763884164Z" level=info msg="Container to stop \"04c07a9c4290f8945063aba7c48a24a66f4f3254903e2c79af5faa8df2068d59\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 14 00:14:59.763929 containerd[1513]: time="2025-05-14T00:14:59.763897459Z" level=info msg="Container to stop \"dc0ac5479f4243b87bbe303d5b624869f5e69937fb8fd146de8527e0166a300b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 14 00:14:59.763929 containerd[1513]: time="2025-05-14T00:14:59.763905234Z" level=info msg="Container to stop \"daef33a50e6eae410afea0f636ef9fea4d34a0e3a7bb8278b7a2e203985731c4\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 14 00:14:59.763929 containerd[1513]: time="2025-05-14T00:14:59.763912728Z" level=info msg="Container to stop \"b53fb347f4a87fb63b92467381bb44228eb38268892fca18e22c325cc4f2e759\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 14 00:14:59.764755 containerd[1513]: time="2025-05-14T00:14:59.764683156Z" level=info msg="Container to stop \"a117d129c8ab0120d676c8dc67ed4540960219ebe676d854adc8955355f541ee\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 14 00:14:59.771691 containerd[1513]: time="2025-05-14T00:14:59.770600587Z" level=info msg="received exit event sandbox_id:\"e46a5d74fe2e14cb405179a60714ad36745ee06cd4f4cdcc00598a2259ffe7e8\" exit_status:137 exited_at:{seconds:1747181699 nanos:729837763}" May 14 00:14:59.771691 containerd[1513]: time="2025-05-14T00:14:59.771077428Z" level=info msg="TearDown network for sandbox \"e46a5d74fe2e14cb405179a60714ad36745ee06cd4f4cdcc00598a2259ffe7e8\" successfully" May 14 00:14:59.771691 containerd[1513]: time="2025-05-14T00:14:59.771563645Z" level=info msg="StopPodSandbox for \"e46a5d74fe2e14cb405179a60714ad36745ee06cd4f4cdcc00598a2259ffe7e8\" returns successfully" May 14 00:14:59.772042 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e46a5d74fe2e14cb405179a60714ad36745ee06cd4f4cdcc00598a2259ffe7e8-rootfs.mount: Deactivated successfully. May 14 00:14:59.776208 containerd[1513]: time="2025-05-14T00:14:59.773009636Z" level=info msg="shim disconnected" id=e46a5d74fe2e14cb405179a60714ad36745ee06cd4f4cdcc00598a2259ffe7e8 namespace=k8s.io May 14 00:14:59.776208 containerd[1513]: time="2025-05-14T00:14:59.773070449Z" level=warning msg="cleaning up after shim disconnected" id=e46a5d74fe2e14cb405179a60714ad36745ee06cd4f4cdcc00598a2259ffe7e8 namespace=k8s.io May 14 00:14:59.776208 containerd[1513]: time="2025-05-14T00:14:59.773080558Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 14 00:14:59.778012 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e46a5d74fe2e14cb405179a60714ad36745ee06cd4f4cdcc00598a2259ffe7e8-shm.mount: Deactivated successfully. May 14 00:14:59.796419 systemd[1]: cri-containerd-69ffd32419e348c43191c8a88777b742ccb7c30a2f195bfdcece5dfedf87fd0c.scope: Deactivated successfully. 
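Lines like `StopContainer ... with timeout 30 (s)`, the `TaskExit` events, and the later `StopPodSandbox`/`TearDown network` messages are the kubelet driving containerd through the CRI: the container is stopped first (SIGTERM, escalating after the timeout), then the enclosing pod sandbox is stopped and its network torn down. Below is a pared-down sketch of those two RPCs issued directly against the CRI socket; the container and sandbox IDs are copied from the log above, while the socket path and everything else are assumptions, not a reproduction of kubelet code.

```go
package main

import (
	"context"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// containerd's CRI plugin listens on the same socket as containerd itself.
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)

	ctx, cancel := context.WithTimeout(context.Background(), time.Minute)
	defer cancel()

	// Mirrors "StopContainer ... with timeout 30 (s)": SIGTERM, then SIGKILL
	// if the task is still running once the timeout elapses.
	_, err = rt.StopContainer(ctx, &runtimeapi.StopContainerRequest{
		ContainerId: "72b5d6046e360e210a5c90addb7ac14d120e7d785e0fa744a93ec08f721bd93b",
		Timeout:     30,
	})
	if err != nil {
		log.Fatal(err)
	}

	// Mirrors "StopPodSandbox": stops the sandbox and tears down its network.
	_, err = rt.StopPodSandbox(ctx, &runtimeapi.StopPodSandboxRequest{
		PodSandboxId: "e46a5d74fe2e14cb405179a60714ad36745ee06cd4f4cdcc00598a2259ffe7e8",
	})
	if err != nil {
		log.Fatal(err)
	}
}
```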
May 14 00:14:59.798591 containerd[1513]: time="2025-05-14T00:14:59.798331995Z" level=info msg="TaskExit event in podsandbox handler container_id:\"69ffd32419e348c43191c8a88777b742ccb7c30a2f195bfdcece5dfedf87fd0c\" id:\"69ffd32419e348c43191c8a88777b742ccb7c30a2f195bfdcece5dfedf87fd0c\" pid:3344 exit_status:137 exited_at:{seconds:1747181699 nanos:797961042}" May 14 00:14:59.832465 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-69ffd32419e348c43191c8a88777b742ccb7c30a2f195bfdcece5dfedf87fd0c-rootfs.mount: Deactivated successfully. May 14 00:14:59.834084 containerd[1513]: time="2025-05-14T00:14:59.833783819Z" level=info msg="shim disconnected" id=69ffd32419e348c43191c8a88777b742ccb7c30a2f195bfdcece5dfedf87fd0c namespace=k8s.io May 14 00:14:59.834084 containerd[1513]: time="2025-05-14T00:14:59.833925054Z" level=warning msg="cleaning up after shim disconnected" id=69ffd32419e348c43191c8a88777b742ccb7c30a2f195bfdcece5dfedf87fd0c namespace=k8s.io May 14 00:14:59.834084 containerd[1513]: time="2025-05-14T00:14:59.833932628Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 14 00:14:59.851696 containerd[1513]: time="2025-05-14T00:14:59.851638393Z" level=info msg="received exit event sandbox_id:\"69ffd32419e348c43191c8a88777b742ccb7c30a2f195bfdcece5dfedf87fd0c\" exit_status:137 exited_at:{seconds:1747181699 nanos:797961042}" May 14 00:14:59.852156 containerd[1513]: time="2025-05-14T00:14:59.851971184Z" level=info msg="TearDown network for sandbox \"69ffd32419e348c43191c8a88777b742ccb7c30a2f195bfdcece5dfedf87fd0c\" successfully" May 14 00:14:59.852156 containerd[1513]: time="2025-05-14T00:14:59.852063768Z" level=info msg="StopPodSandbox for \"69ffd32419e348c43191c8a88777b742ccb7c30a2f195bfdcece5dfedf87fd0c\" returns successfully" May 14 00:14:59.896259 kubelet[3148]: I0514 00:14:59.896124 3148 scope.go:117] "RemoveContainer" containerID="a117d129c8ab0120d676c8dc67ed4540960219ebe676d854adc8955355f541ee" May 14 00:14:59.900724 containerd[1513]: time="2025-05-14T00:14:59.900144747Z" level=info msg="RemoveContainer for \"a117d129c8ab0120d676c8dc67ed4540960219ebe676d854adc8955355f541ee\"" May 14 00:14:59.907486 containerd[1513]: time="2025-05-14T00:14:59.907400778Z" level=info msg="RemoveContainer for \"a117d129c8ab0120d676c8dc67ed4540960219ebe676d854adc8955355f541ee\" returns successfully" May 14 00:14:59.908619 kubelet[3148]: I0514 00:14:59.908588 3148 scope.go:117] "RemoveContainer" containerID="b53fb347f4a87fb63b92467381bb44228eb38268892fca18e22c325cc4f2e759" May 14 00:14:59.910569 containerd[1513]: time="2025-05-14T00:14:59.910104477Z" level=info msg="RemoveContainer for \"b53fb347f4a87fb63b92467381bb44228eb38268892fca18e22c325cc4f2e759\"" May 14 00:14:59.914091 kubelet[3148]: I0514 00:14:59.914057 3148 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d15f95ec-8ce1-4674-8592-bdaecca7c346-cilium-config-path\") pod \"d15f95ec-8ce1-4674-8592-bdaecca7c346\" (UID: \"d15f95ec-8ce1-4674-8592-bdaecca7c346\") " May 14 00:14:59.914241 kubelet[3148]: I0514 00:14:59.914109 3148 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c56b5a78-42f9-4058-9b4b-3aab2a24d615-cni-path\") pod \"c56b5a78-42f9-4058-9b4b-3aab2a24d615\" (UID: \"c56b5a78-42f9-4058-9b4b-3aab2a24d615\") " May 14 00:14:59.914241 kubelet[3148]: I0514 00:14:59.914132 3148 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume 
\"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c56b5a78-42f9-4058-9b4b-3aab2a24d615-clustermesh-secrets\") pod \"c56b5a78-42f9-4058-9b4b-3aab2a24d615\" (UID: \"c56b5a78-42f9-4058-9b4b-3aab2a24d615\") " May 14 00:14:59.914241 kubelet[3148]: I0514 00:14:59.914147 3148 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c56b5a78-42f9-4058-9b4b-3aab2a24d615-xtables-lock\") pod \"c56b5a78-42f9-4058-9b4b-3aab2a24d615\" (UID: \"c56b5a78-42f9-4058-9b4b-3aab2a24d615\") " May 14 00:14:59.914241 kubelet[3148]: I0514 00:14:59.914165 3148 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c56b5a78-42f9-4058-9b4b-3aab2a24d615-cilium-cgroup\") pod \"c56b5a78-42f9-4058-9b4b-3aab2a24d615\" (UID: \"c56b5a78-42f9-4058-9b4b-3aab2a24d615\") " May 14 00:14:59.914241 kubelet[3148]: I0514 00:14:59.914183 3148 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c56b5a78-42f9-4058-9b4b-3aab2a24d615-host-proc-sys-net\") pod \"c56b5a78-42f9-4058-9b4b-3aab2a24d615\" (UID: \"c56b5a78-42f9-4058-9b4b-3aab2a24d615\") " May 14 00:14:59.914241 kubelet[3148]: I0514 00:14:59.914198 3148 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c56b5a78-42f9-4058-9b4b-3aab2a24d615-cilium-run\") pod \"c56b5a78-42f9-4058-9b4b-3aab2a24d615\" (UID: \"c56b5a78-42f9-4058-9b4b-3aab2a24d615\") " May 14 00:14:59.914466 kubelet[3148]: I0514 00:14:59.914217 3148 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c56b5a78-42f9-4058-9b4b-3aab2a24d615-cilium-config-path\") pod \"c56b5a78-42f9-4058-9b4b-3aab2a24d615\" (UID: \"c56b5a78-42f9-4058-9b4b-3aab2a24d615\") " May 14 00:14:59.914466 kubelet[3148]: I0514 00:14:59.914234 3148 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c56b5a78-42f9-4058-9b4b-3aab2a24d615-host-proc-sys-kernel\") pod \"c56b5a78-42f9-4058-9b4b-3aab2a24d615\" (UID: \"c56b5a78-42f9-4058-9b4b-3aab2a24d615\") " May 14 00:14:59.914466 kubelet[3148]: I0514 00:14:59.914251 3148 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c56b5a78-42f9-4058-9b4b-3aab2a24d615-etc-cni-netd\") pod \"c56b5a78-42f9-4058-9b4b-3aab2a24d615\" (UID: \"c56b5a78-42f9-4058-9b4b-3aab2a24d615\") " May 14 00:14:59.914466 kubelet[3148]: I0514 00:14:59.914267 3148 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c56b5a78-42f9-4058-9b4b-3aab2a24d615-hostproc\") pod \"c56b5a78-42f9-4058-9b4b-3aab2a24d615\" (UID: \"c56b5a78-42f9-4058-9b4b-3aab2a24d615\") " May 14 00:14:59.914466 kubelet[3148]: I0514 00:14:59.914291 3148 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4vdpd\" (UniqueName: \"kubernetes.io/projected/c56b5a78-42f9-4058-9b4b-3aab2a24d615-kube-api-access-4vdpd\") pod \"c56b5a78-42f9-4058-9b4b-3aab2a24d615\" (UID: \"c56b5a78-42f9-4058-9b4b-3aab2a24d615\") " May 14 00:14:59.914466 kubelet[3148]: I0514 00:14:59.914308 3148 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: 
\"kubernetes.io/host-path/c56b5a78-42f9-4058-9b4b-3aab2a24d615-bpf-maps\") pod \"c56b5a78-42f9-4058-9b4b-3aab2a24d615\" (UID: \"c56b5a78-42f9-4058-9b4b-3aab2a24d615\") " May 14 00:14:59.914630 kubelet[3148]: I0514 00:14:59.914329 3148 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b225j\" (UniqueName: \"kubernetes.io/projected/d15f95ec-8ce1-4674-8592-bdaecca7c346-kube-api-access-b225j\") pod \"d15f95ec-8ce1-4674-8592-bdaecca7c346\" (UID: \"d15f95ec-8ce1-4674-8592-bdaecca7c346\") " May 14 00:14:59.916937 kubelet[3148]: I0514 00:14:59.914703 3148 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c56b5a78-42f9-4058-9b4b-3aab2a24d615-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "c56b5a78-42f9-4058-9b4b-3aab2a24d615" (UID: "c56b5a78-42f9-4058-9b4b-3aab2a24d615"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 14 00:14:59.927387 containerd[1513]: time="2025-05-14T00:14:59.927290080Z" level=info msg="RemoveContainer for \"b53fb347f4a87fb63b92467381bb44228eb38268892fca18e22c325cc4f2e759\" returns successfully" May 14 00:14:59.933839 kubelet[3148]: I0514 00:14:59.933099 3148 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c56b5a78-42f9-4058-9b4b-3aab2a24d615-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "c56b5a78-42f9-4058-9b4b-3aab2a24d615" (UID: "c56b5a78-42f9-4058-9b4b-3aab2a24d615"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 14 00:14:59.933839 kubelet[3148]: I0514 00:14:59.933160 3148 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c56b5a78-42f9-4058-9b4b-3aab2a24d615-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "c56b5a78-42f9-4058-9b4b-3aab2a24d615" (UID: "c56b5a78-42f9-4058-9b4b-3aab2a24d615"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 14 00:14:59.933839 kubelet[3148]: I0514 00:14:59.933175 3148 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c56b5a78-42f9-4058-9b4b-3aab2a24d615-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "c56b5a78-42f9-4058-9b4b-3aab2a24d615" (UID: "c56b5a78-42f9-4058-9b4b-3aab2a24d615"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 14 00:14:59.933839 kubelet[3148]: I0514 00:14:59.933187 3148 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c56b5a78-42f9-4058-9b4b-3aab2a24d615-hostproc" (OuterVolumeSpecName: "hostproc") pod "c56b5a78-42f9-4058-9b4b-3aab2a24d615" (UID: "c56b5a78-42f9-4058-9b4b-3aab2a24d615"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 14 00:14:59.933839 kubelet[3148]: I0514 00:14:59.933567 3148 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d15f95ec-8ce1-4674-8592-bdaecca7c346-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "d15f95ec-8ce1-4674-8592-bdaecca7c346" (UID: "d15f95ec-8ce1-4674-8592-bdaecca7c346"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" May 14 00:14:59.934119 kubelet[3148]: I0514 00:14:59.933609 3148 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c56b5a78-42f9-4058-9b4b-3aab2a24d615-cni-path" (OuterVolumeSpecName: "cni-path") pod "c56b5a78-42f9-4058-9b4b-3aab2a24d615" (UID: "c56b5a78-42f9-4058-9b4b-3aab2a24d615"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 14 00:14:59.936597 kubelet[3148]: I0514 00:14:59.936562 3148 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c56b5a78-42f9-4058-9b4b-3aab2a24d615-kube-api-access-4vdpd" (OuterVolumeSpecName: "kube-api-access-4vdpd") pod "c56b5a78-42f9-4058-9b4b-3aab2a24d615" (UID: "c56b5a78-42f9-4058-9b4b-3aab2a24d615"). InnerVolumeSpecName "kube-api-access-4vdpd". PluginName "kubernetes.io/projected", VolumeGidValue "" May 14 00:14:59.936597 kubelet[3148]: I0514 00:14:59.936561 3148 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c56b5a78-42f9-4058-9b4b-3aab2a24d615-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "c56b5a78-42f9-4058-9b4b-3aab2a24d615" (UID: "c56b5a78-42f9-4058-9b4b-3aab2a24d615"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" May 14 00:14:59.936721 kubelet[3148]: I0514 00:14:59.936677 3148 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c56b5a78-42f9-4058-9b4b-3aab2a24d615-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "c56b5a78-42f9-4058-9b4b-3aab2a24d615" (UID: "c56b5a78-42f9-4058-9b4b-3aab2a24d615"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 14 00:14:59.937069 kubelet[3148]: I0514 00:14:59.936769 3148 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c56b5a78-42f9-4058-9b4b-3aab2a24d615-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "c56b5a78-42f9-4058-9b4b-3aab2a24d615" (UID: "c56b5a78-42f9-4058-9b4b-3aab2a24d615"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 14 00:14:59.937069 kubelet[3148]: I0514 00:14:59.936800 3148 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c56b5a78-42f9-4058-9b4b-3aab2a24d615-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "c56b5a78-42f9-4058-9b4b-3aab2a24d615" (UID: "c56b5a78-42f9-4058-9b4b-3aab2a24d615"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 14 00:14:59.937069 kubelet[3148]: I0514 00:14:59.936819 3148 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c56b5a78-42f9-4058-9b4b-3aab2a24d615-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "c56b5a78-42f9-4058-9b4b-3aab2a24d615" (UID: "c56b5a78-42f9-4058-9b4b-3aab2a24d615"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 14 00:14:59.937069 kubelet[3148]: I0514 00:14:59.936934 3148 scope.go:117] "RemoveContainer" containerID="daef33a50e6eae410afea0f636ef9fea4d34a0e3a7bb8278b7a2e203985731c4" May 14 00:14:59.938545 kubelet[3148]: I0514 00:14:59.938513 3148 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d15f95ec-8ce1-4674-8592-bdaecca7c346-kube-api-access-b225j" (OuterVolumeSpecName: "kube-api-access-b225j") pod "d15f95ec-8ce1-4674-8592-bdaecca7c346" (UID: "d15f95ec-8ce1-4674-8592-bdaecca7c346"). InnerVolumeSpecName "kube-api-access-b225j". PluginName "kubernetes.io/projected", VolumeGidValue "" May 14 00:14:59.939750 containerd[1513]: time="2025-05-14T00:14:59.939719663Z" level=info msg="RemoveContainer for \"daef33a50e6eae410afea0f636ef9fea4d34a0e3a7bb8278b7a2e203985731c4\"" May 14 00:14:59.944434 containerd[1513]: time="2025-05-14T00:14:59.944396316Z" level=info msg="RemoveContainer for \"daef33a50e6eae410afea0f636ef9fea4d34a0e3a7bb8278b7a2e203985731c4\" returns successfully" May 14 00:14:59.944648 kubelet[3148]: I0514 00:14:59.944620 3148 scope.go:117] "RemoveContainer" containerID="dc0ac5479f4243b87bbe303d5b624869f5e69937fb8fd146de8527e0166a300b" May 14 00:14:59.945780 containerd[1513]: time="2025-05-14T00:14:59.945758690Z" level=info msg="RemoveContainer for \"dc0ac5479f4243b87bbe303d5b624869f5e69937fb8fd146de8527e0166a300b\"" May 14 00:14:59.949228 containerd[1513]: time="2025-05-14T00:14:59.949184458Z" level=info msg="RemoveContainer for \"dc0ac5479f4243b87bbe303d5b624869f5e69937fb8fd146de8527e0166a300b\" returns successfully" May 14 00:14:59.949362 kubelet[3148]: I0514 00:14:59.949335 3148 scope.go:117] "RemoveContainer" containerID="04c07a9c4290f8945063aba7c48a24a66f4f3254903e2c79af5faa8df2068d59" May 14 00:14:59.950428 containerd[1513]: time="2025-05-14T00:14:59.950395139Z" level=info msg="RemoveContainer for \"04c07a9c4290f8945063aba7c48a24a66f4f3254903e2c79af5faa8df2068d59\"" May 14 00:14:59.953460 containerd[1513]: time="2025-05-14T00:14:59.953429796Z" level=info msg="RemoveContainer for \"04c07a9c4290f8945063aba7c48a24a66f4f3254903e2c79af5faa8df2068d59\" returns successfully" May 14 00:14:59.953590 kubelet[3148]: I0514 00:14:59.953565 3148 scope.go:117] "RemoveContainer" containerID="a117d129c8ab0120d676c8dc67ed4540960219ebe676d854adc8955355f541ee" May 14 00:14:59.961127 containerd[1513]: time="2025-05-14T00:14:59.953730628Z" level=error msg="ContainerStatus for \"a117d129c8ab0120d676c8dc67ed4540960219ebe676d854adc8955355f541ee\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a117d129c8ab0120d676c8dc67ed4540960219ebe676d854adc8955355f541ee\": not found" May 14 00:14:59.969515 kubelet[3148]: E0514 00:14:59.968087 3148 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a117d129c8ab0120d676c8dc67ed4540960219ebe676d854adc8955355f541ee\": not found" containerID="a117d129c8ab0120d676c8dc67ed4540960219ebe676d854adc8955355f541ee" May 14 00:14:59.975925 kubelet[3148]: I0514 00:14:59.968168 3148 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a117d129c8ab0120d676c8dc67ed4540960219ebe676d854adc8955355f541ee"} err="failed to get container status \"a117d129c8ab0120d676c8dc67ed4540960219ebe676d854adc8955355f541ee\": rpc error: code = NotFound desc = an error occurred when try to find container 
\"a117d129c8ab0120d676c8dc67ed4540960219ebe676d854adc8955355f541ee\": not found" May 14 00:14:59.976107 kubelet[3148]: I0514 00:14:59.976091 3148 scope.go:117] "RemoveContainer" containerID="b53fb347f4a87fb63b92467381bb44228eb38268892fca18e22c325cc4f2e759" May 14 00:14:59.976554 containerd[1513]: time="2025-05-14T00:14:59.976504888Z" level=error msg="ContainerStatus for \"b53fb347f4a87fb63b92467381bb44228eb38268892fca18e22c325cc4f2e759\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b53fb347f4a87fb63b92467381bb44228eb38268892fca18e22c325cc4f2e759\": not found" May 14 00:14:59.976790 kubelet[3148]: E0514 00:14:59.976753 3148 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b53fb347f4a87fb63b92467381bb44228eb38268892fca18e22c325cc4f2e759\": not found" containerID="b53fb347f4a87fb63b92467381bb44228eb38268892fca18e22c325cc4f2e759" May 14 00:14:59.976905 kubelet[3148]: I0514 00:14:59.976859 3148 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b53fb347f4a87fb63b92467381bb44228eb38268892fca18e22c325cc4f2e759"} err="failed to get container status \"b53fb347f4a87fb63b92467381bb44228eb38268892fca18e22c325cc4f2e759\": rpc error: code = NotFound desc = an error occurred when try to find container \"b53fb347f4a87fb63b92467381bb44228eb38268892fca18e22c325cc4f2e759\": not found" May 14 00:14:59.976905 kubelet[3148]: I0514 00:14:59.976886 3148 scope.go:117] "RemoveContainer" containerID="daef33a50e6eae410afea0f636ef9fea4d34a0e3a7bb8278b7a2e203985731c4" May 14 00:14:59.977114 containerd[1513]: time="2025-05-14T00:14:59.977080002Z" level=error msg="ContainerStatus for \"daef33a50e6eae410afea0f636ef9fea4d34a0e3a7bb8278b7a2e203985731c4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"daef33a50e6eae410afea0f636ef9fea4d34a0e3a7bb8278b7a2e203985731c4\": not found" May 14 00:14:59.977172 kubelet[3148]: E0514 00:14:59.977158 3148 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"daef33a50e6eae410afea0f636ef9fea4d34a0e3a7bb8278b7a2e203985731c4\": not found" containerID="daef33a50e6eae410afea0f636ef9fea4d34a0e3a7bb8278b7a2e203985731c4" May 14 00:14:59.977214 kubelet[3148]: I0514 00:14:59.977174 3148 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"daef33a50e6eae410afea0f636ef9fea4d34a0e3a7bb8278b7a2e203985731c4"} err="failed to get container status \"daef33a50e6eae410afea0f636ef9fea4d34a0e3a7bb8278b7a2e203985731c4\": rpc error: code = NotFound desc = an error occurred when try to find container \"daef33a50e6eae410afea0f636ef9fea4d34a0e3a7bb8278b7a2e203985731c4\": not found" May 14 00:14:59.977214 kubelet[3148]: I0514 00:14:59.977188 3148 scope.go:117] "RemoveContainer" containerID="dc0ac5479f4243b87bbe303d5b624869f5e69937fb8fd146de8527e0166a300b" May 14 00:14:59.977334 containerd[1513]: time="2025-05-14T00:14:59.977315943Z" level=error msg="ContainerStatus for \"dc0ac5479f4243b87bbe303d5b624869f5e69937fb8fd146de8527e0166a300b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"dc0ac5479f4243b87bbe303d5b624869f5e69937fb8fd146de8527e0166a300b\": not found" May 14 00:14:59.977429 kubelet[3148]: E0514 00:14:59.977399 3148 remote_runtime.go:432] "ContainerStatus from 
runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"dc0ac5479f4243b87bbe303d5b624869f5e69937fb8fd146de8527e0166a300b\": not found" containerID="dc0ac5479f4243b87bbe303d5b624869f5e69937fb8fd146de8527e0166a300b" May 14 00:14:59.977492 kubelet[3148]: I0514 00:14:59.977472 3148 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"dc0ac5479f4243b87bbe303d5b624869f5e69937fb8fd146de8527e0166a300b"} err="failed to get container status \"dc0ac5479f4243b87bbe303d5b624869f5e69937fb8fd146de8527e0166a300b\": rpc error: code = NotFound desc = an error occurred when try to find container \"dc0ac5479f4243b87bbe303d5b624869f5e69937fb8fd146de8527e0166a300b\": not found" May 14 00:14:59.977492 kubelet[3148]: I0514 00:14:59.977487 3148 scope.go:117] "RemoveContainer" containerID="04c07a9c4290f8945063aba7c48a24a66f4f3254903e2c79af5faa8df2068d59" May 14 00:14:59.977616 containerd[1513]: time="2025-05-14T00:14:59.977583583Z" level=error msg="ContainerStatus for \"04c07a9c4290f8945063aba7c48a24a66f4f3254903e2c79af5faa8df2068d59\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"04c07a9c4290f8945063aba7c48a24a66f4f3254903e2c79af5faa8df2068d59\": not found" May 14 00:14:59.977700 kubelet[3148]: E0514 00:14:59.977674 3148 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"04c07a9c4290f8945063aba7c48a24a66f4f3254903e2c79af5faa8df2068d59\": not found" containerID="04c07a9c4290f8945063aba7c48a24a66f4f3254903e2c79af5faa8df2068d59" May 14 00:14:59.977700 kubelet[3148]: I0514 00:14:59.977693 3148 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"04c07a9c4290f8945063aba7c48a24a66f4f3254903e2c79af5faa8df2068d59"} err="failed to get container status \"04c07a9c4290f8945063aba7c48a24a66f4f3254903e2c79af5faa8df2068d59\": rpc error: code = NotFound desc = an error occurred when try to find container \"04c07a9c4290f8945063aba7c48a24a66f4f3254903e2c79af5faa8df2068d59\": not found" May 14 00:14:59.977767 kubelet[3148]: I0514 00:14:59.977703 3148 scope.go:117] "RemoveContainer" containerID="72b5d6046e360e210a5c90addb7ac14d120e7d785e0fa744a93ec08f721bd93b" May 14 00:14:59.978998 containerd[1513]: time="2025-05-14T00:14:59.978973850Z" level=info msg="RemoveContainer for \"72b5d6046e360e210a5c90addb7ac14d120e7d785e0fa744a93ec08f721bd93b\"" May 14 00:14:59.984059 containerd[1513]: time="2025-05-14T00:14:59.982926370Z" level=info msg="RemoveContainer for \"72b5d6046e360e210a5c90addb7ac14d120e7d785e0fa744a93ec08f721bd93b\" returns successfully" May 14 00:14:59.984059 containerd[1513]: time="2025-05-14T00:14:59.983234806Z" level=error msg="ContainerStatus for \"72b5d6046e360e210a5c90addb7ac14d120e7d785e0fa744a93ec08f721bd93b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"72b5d6046e360e210a5c90addb7ac14d120e7d785e0fa744a93ec08f721bd93b\": not found" May 14 00:14:59.984177 kubelet[3148]: I0514 00:14:59.983081 3148 scope.go:117] "RemoveContainer" containerID="72b5d6046e360e210a5c90addb7ac14d120e7d785e0fa744a93ec08f721bd93b" May 14 00:14:59.984177 kubelet[3148]: E0514 00:14:59.983318 3148 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container 
\"72b5d6046e360e210a5c90addb7ac14d120e7d785e0fa744a93ec08f721bd93b\": not found" containerID="72b5d6046e360e210a5c90addb7ac14d120e7d785e0fa744a93ec08f721bd93b" May 14 00:14:59.984177 kubelet[3148]: I0514 00:14:59.983335 3148 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"72b5d6046e360e210a5c90addb7ac14d120e7d785e0fa744a93ec08f721bd93b"} err="failed to get container status \"72b5d6046e360e210a5c90addb7ac14d120e7d785e0fa744a93ec08f721bd93b\": rpc error: code = NotFound desc = an error occurred when try to find container \"72b5d6046e360e210a5c90addb7ac14d120e7d785e0fa744a93ec08f721bd93b\": not found" May 14 00:15:00.000240 systemd[1]: Removed slice kubepods-besteffort-podd15f95ec_8ce1_4674_8592_bdaecca7c346.slice - libcontainer container kubepods-besteffort-podd15f95ec_8ce1_4674_8592_bdaecca7c346.slice. May 14 00:15:00.000373 systemd[1]: kubepods-besteffort-podd15f95ec_8ce1_4674_8592_bdaecca7c346.slice: Consumed 780ms CPU time, 25.2M memory peak, 1.6M read from disk, 4K written to disk. May 14 00:15:00.015306 kubelet[3148]: I0514 00:15:00.015251 3148 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c56b5a78-42f9-4058-9b4b-3aab2a24d615-lib-modules\") pod \"c56b5a78-42f9-4058-9b4b-3aab2a24d615\" (UID: \"c56b5a78-42f9-4058-9b4b-3aab2a24d615\") " May 14 00:15:00.015494 kubelet[3148]: I0514 00:15:00.015330 3148 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c56b5a78-42f9-4058-9b4b-3aab2a24d615-hubble-tls\") pod \"c56b5a78-42f9-4058-9b4b-3aab2a24d615\" (UID: \"c56b5a78-42f9-4058-9b4b-3aab2a24d615\") " May 14 00:15:00.015607 kubelet[3148]: I0514 00:15:00.015572 3148 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c56b5a78-42f9-4058-9b4b-3aab2a24d615-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "c56b5a78-42f9-4058-9b4b-3aab2a24d615" (UID: "c56b5a78-42f9-4058-9b4b-3aab2a24d615"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 14 00:15:00.017152 kubelet[3148]: I0514 00:15:00.017116 3148 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d15f95ec-8ce1-4674-8592-bdaecca7c346-cilium-config-path\") on node \"ci-4284-0-0-n-fdde459219\" DevicePath \"\"" May 14 00:15:00.017152 kubelet[3148]: I0514 00:15:00.017143 3148 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c56b5a78-42f9-4058-9b4b-3aab2a24d615-cni-path\") on node \"ci-4284-0-0-n-fdde459219\" DevicePath \"\"" May 14 00:15:00.017299 kubelet[3148]: I0514 00:15:00.017157 3148 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c56b5a78-42f9-4058-9b4b-3aab2a24d615-clustermesh-secrets\") on node \"ci-4284-0-0-n-fdde459219\" DevicePath \"\"" May 14 00:15:00.017299 kubelet[3148]: I0514 00:15:00.017168 3148 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c56b5a78-42f9-4058-9b4b-3aab2a24d615-xtables-lock\") on node \"ci-4284-0-0-n-fdde459219\" DevicePath \"\"" May 14 00:15:00.017299 kubelet[3148]: I0514 00:15:00.017178 3148 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c56b5a78-42f9-4058-9b4b-3aab2a24d615-cilium-cgroup\") on node \"ci-4284-0-0-n-fdde459219\" DevicePath \"\"" May 14 00:15:00.017299 kubelet[3148]: I0514 00:15:00.017187 3148 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c56b5a78-42f9-4058-9b4b-3aab2a24d615-etc-cni-netd\") on node \"ci-4284-0-0-n-fdde459219\" DevicePath \"\"" May 14 00:15:00.017299 kubelet[3148]: I0514 00:15:00.017198 3148 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c56b5a78-42f9-4058-9b4b-3aab2a24d615-hostproc\") on node \"ci-4284-0-0-n-fdde459219\" DevicePath \"\"" May 14 00:15:00.017299 kubelet[3148]: I0514 00:15:00.017208 3148 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c56b5a78-42f9-4058-9b4b-3aab2a24d615-host-proc-sys-net\") on node \"ci-4284-0-0-n-fdde459219\" DevicePath \"\"" May 14 00:15:00.017299 kubelet[3148]: I0514 00:15:00.017216 3148 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c56b5a78-42f9-4058-9b4b-3aab2a24d615-cilium-run\") on node \"ci-4284-0-0-n-fdde459219\" DevicePath \"\"" May 14 00:15:00.017299 kubelet[3148]: I0514 00:15:00.017227 3148 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c56b5a78-42f9-4058-9b4b-3aab2a24d615-cilium-config-path\") on node \"ci-4284-0-0-n-fdde459219\" DevicePath \"\"" May 14 00:15:00.017683 kubelet[3148]: I0514 00:15:00.017238 3148 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c56b5a78-42f9-4058-9b4b-3aab2a24d615-host-proc-sys-kernel\") on node \"ci-4284-0-0-n-fdde459219\" DevicePath \"\"" May 14 00:15:00.017683 kubelet[3148]: I0514 00:15:00.017248 3148 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-4vdpd\" (UniqueName: \"kubernetes.io/projected/c56b5a78-42f9-4058-9b4b-3aab2a24d615-kube-api-access-4vdpd\") on node \"ci-4284-0-0-n-fdde459219\" DevicePath \"\"" May 14 00:15:00.017683 kubelet[3148]: I0514 
00:15:00.017258 3148 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c56b5a78-42f9-4058-9b4b-3aab2a24d615-bpf-maps\") on node \"ci-4284-0-0-n-fdde459219\" DevicePath \"\"" May 14 00:15:00.017683 kubelet[3148]: I0514 00:15:00.017268 3148 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-b225j\" (UniqueName: \"kubernetes.io/projected/d15f95ec-8ce1-4674-8592-bdaecca7c346-kube-api-access-b225j\") on node \"ci-4284-0-0-n-fdde459219\" DevicePath \"\"" May 14 00:15:00.020018 kubelet[3148]: I0514 00:15:00.019954 3148 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c56b5a78-42f9-4058-9b4b-3aab2a24d615-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "c56b5a78-42f9-4058-9b4b-3aab2a24d615" (UID: "c56b5a78-42f9-4058-9b4b-3aab2a24d615"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" May 14 00:15:00.118539 kubelet[3148]: I0514 00:15:00.118434 3148 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c56b5a78-42f9-4058-9b4b-3aab2a24d615-lib-modules\") on node \"ci-4284-0-0-n-fdde459219\" DevicePath \"\"" May 14 00:15:00.118539 kubelet[3148]: I0514 00:15:00.118507 3148 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c56b5a78-42f9-4058-9b4b-3aab2a24d615-hubble-tls\") on node \"ci-4284-0-0-n-fdde459219\" DevicePath \"\"" May 14 00:15:00.191491 systemd[1]: Removed slice kubepods-burstable-podc56b5a78_42f9_4058_9b4b_3aab2a24d615.slice - libcontainer container kubepods-burstable-podc56b5a78_42f9_4058_9b4b_3aab2a24d615.slice. May 14 00:15:00.191884 systemd[1]: kubepods-burstable-podc56b5a78_42f9_4058_9b4b_3aab2a24d615.slice: Consumed 8.401s CPU time, 159.4M memory peak, 32.6M read from disk, 13.3M written to disk. May 14 00:15:00.671327 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-69ffd32419e348c43191c8a88777b742ccb7c30a2f195bfdcece5dfedf87fd0c-shm.mount: Deactivated successfully. May 14 00:15:00.671840 systemd[1]: var-lib-kubelet-pods-c56b5a78\x2d42f9\x2d4058\x2d9b4b\x2d3aab2a24d615-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d4vdpd.mount: Deactivated successfully. May 14 00:15:00.672006 systemd[1]: var-lib-kubelet-pods-d15f95ec\x2d8ce1\x2d4674\x2d8592\x2dbdaecca7c346-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2db225j.mount: Deactivated successfully. May 14 00:15:00.672237 systemd[1]: var-lib-kubelet-pods-c56b5a78\x2d42f9\x2d4058\x2d9b4b\x2d3aab2a24d615-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. May 14 00:15:00.672394 systemd[1]: var-lib-kubelet-pods-c56b5a78\x2d42f9\x2d4058\x2d9b4b\x2d3aab2a24d615-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 14 00:15:01.662765 sshd[4668]: Connection closed by 139.178.89.65 port 57404 May 14 00:15:01.663784 sshd-session[4666]: pam_unix(sshd:session): session closed for user core May 14 00:15:01.668488 systemd[1]: sshd@21-95.217.191.100:22-139.178.89.65:57404.service: Deactivated successfully. May 14 00:15:01.670980 systemd[1]: session-20.scope: Deactivated successfully. May 14 00:15:01.674476 systemd-logind[1497]: Session 20 logged out. Waiting for processes to exit. May 14 00:15:01.676711 systemd-logind[1497]: Removed session 20. 
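The mount units cleaned up above (`var-lib-kubelet-pods-...-kube\x2dapi\x2daccess\x2d4vdpd.mount` and friends) are kubelet volume paths run through systemd's unit-name escaping, where `/` becomes `-` and literal characters such as `-` and `~` become `\x2d` and `\x7e`. A small illustrative decoder for that convention (not code from systemd or kubelet) is sketched below; the example unit name is taken from the log.

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// unescapeMountUnit reverses systemd's path escaping for .mount unit names:
// a plain "-" separates path components, while "\xHH" encodes a literal byte
// (so "\x2d" is "-" and "\x7e" is "~").
func unescapeMountUnit(unit string) string {
	s := strings.TrimSuffix(unit, ".mount")
	var out strings.Builder
	out.WriteByte('/')
	for i := 0; i < len(s); i++ {
		switch {
		case s[i] == '\\' && i+3 < len(s) && s[i+1] == 'x':
			if b, err := strconv.ParseUint(s[i+2:i+4], 16, 8); err == nil {
				out.WriteByte(byte(b))
				i += 3
				continue
			}
			out.WriteByte(s[i])
		case s[i] == '-':
			out.WriteByte('/')
		default:
			out.WriteByte(s[i])
		}
	}
	return out.String()
}

func main() {
	unit := `var-lib-kubelet-pods-c56b5a78\x2d42f9\x2d4058\x2d9b4b\x2d3aab2a24d615-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d4vdpd.mount`
	fmt.Println(unescapeMountUnit(unit))
	// /var/lib/kubelet/pods/c56b5a78-42f9-4058-9b4b-3aab2a24d615/volumes/kubernetes.io~projected/kube-api-access-4vdpd
}
```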
May 14 00:15:01.832954 systemd[1]: Started sshd@22-95.217.191.100:22-139.178.89.65:57406.service - OpenSSH per-connection server daemon (139.178.89.65:57406). May 14 00:15:01.986261 kubelet[3148]: I0514 00:15:01.986209 3148 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c56b5a78-42f9-4058-9b4b-3aab2a24d615" path="/var/lib/kubelet/pods/c56b5a78-42f9-4058-9b4b-3aab2a24d615/volumes" May 14 00:15:01.986909 kubelet[3148]: I0514 00:15:01.986876 3148 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d15f95ec-8ce1-4674-8592-bdaecca7c346" path="/var/lib/kubelet/pods/d15f95ec-8ce1-4674-8592-bdaecca7c346/volumes" May 14 00:15:02.844219 sshd[4822]: Accepted publickey for core from 139.178.89.65 port 57406 ssh2: RSA SHA256:hl+BNyx6+jF0K6RugsdzigOvrBYkZECkTt1LqyyIOlQ May 14 00:15:02.846689 sshd-session[4822]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:15:02.856127 systemd-logind[1497]: New session 21 of user core. May 14 00:15:02.866788 systemd[1]: Started session-21.scope - Session 21 of User core. May 14 00:15:03.124350 kubelet[3148]: E0514 00:15:03.123801 3148 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 14 00:15:03.781140 containerd[1513]: time="2025-05-14T00:15:03.780648918Z" level=warning msg="container event discarded" container=3df0ad6d3630939720689d4a9527afc14a48e295b36cc1138f9c1c7b9feddc26 type=CONTAINER_CREATED_EVENT May 14 00:15:03.781140 containerd[1513]: time="2025-05-14T00:15:03.780774201Z" level=warning msg="container event discarded" container=3df0ad6d3630939720689d4a9527afc14a48e295b36cc1138f9c1c7b9feddc26 type=CONTAINER_STARTED_EVENT May 14 00:15:03.840509 containerd[1513]: time="2025-05-14T00:15:03.840360777Z" level=warning msg="container event discarded" container=78a1cf7fd7ffd2b0367d0537b9e3cdf6316d08797603428730186eabdf6b0448 type=CONTAINER_CREATED_EVENT May 14 00:15:03.936548 containerd[1513]: time="2025-05-14T00:15:03.936421374Z" level=warning msg="container event discarded" container=456e583e01a8c74a9e39db1f13d9a73414011a04fa4b121c53e793d87d2fe7f3 type=CONTAINER_CREATED_EVENT May 14 00:15:03.936548 containerd[1513]: time="2025-05-14T00:15:03.936521500Z" level=warning msg="container event discarded" container=456e583e01a8c74a9e39db1f13d9a73414011a04fa4b121c53e793d87d2fe7f3 type=CONTAINER_STARTED_EVENT May 14 00:15:03.948057 containerd[1513]: time="2025-05-14T00:15:03.947932190Z" level=warning msg="container event discarded" container=78a1cf7fd7ffd2b0367d0537b9e3cdf6316d08797603428730186eabdf6b0448 type=CONTAINER_STARTED_EVENT May 14 00:15:03.948057 containerd[1513]: time="2025-05-14T00:15:03.947996381Z" level=warning msg="container event discarded" container=d39ecb6c48d18077fe74a6ec95bdda5a28fbbe3d97ebb28f8e4376cc54293576 type=CONTAINER_CREATED_EVENT May 14 00:15:04.009613 containerd[1513]: time="2025-05-14T00:15:04.009345862Z" level=warning msg="container event discarded" container=d39ecb6c48d18077fe74a6ec95bdda5a28fbbe3d97ebb28f8e4376cc54293576 type=CONTAINER_STARTED_EVENT May 14 00:15:04.062494 kubelet[3148]: I0514 00:15:04.061767 3148 topology_manager.go:215] "Topology Admit Handler" podUID="59587b8b-5626-4a8c-ae98-9502a4a18713" podNamespace="kube-system" podName="cilium-75z97" May 14 00:15:04.062494 kubelet[3148]: E0514 00:15:04.061871 3148 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c56b5a78-42f9-4058-9b4b-3aab2a24d615" 
containerName="apply-sysctl-overwrites" May 14 00:15:04.062494 kubelet[3148]: E0514 00:15:04.061884 3148 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c56b5a78-42f9-4058-9b4b-3aab2a24d615" containerName="clean-cilium-state" May 14 00:15:04.062494 kubelet[3148]: E0514 00:15:04.061892 3148 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c56b5a78-42f9-4058-9b4b-3aab2a24d615" containerName="cilium-agent" May 14 00:15:04.062494 kubelet[3148]: E0514 00:15:04.061901 3148 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d15f95ec-8ce1-4674-8592-bdaecca7c346" containerName="cilium-operator" May 14 00:15:04.062494 kubelet[3148]: E0514 00:15:04.061909 3148 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c56b5a78-42f9-4058-9b4b-3aab2a24d615" containerName="mount-cgroup" May 14 00:15:04.062494 kubelet[3148]: E0514 00:15:04.061916 3148 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c56b5a78-42f9-4058-9b4b-3aab2a24d615" containerName="mount-bpf-fs" May 14 00:15:04.062494 kubelet[3148]: I0514 00:15:04.061946 3148 memory_manager.go:354] "RemoveStaleState removing state" podUID="d15f95ec-8ce1-4674-8592-bdaecca7c346" containerName="cilium-operator" May 14 00:15:04.062494 kubelet[3148]: I0514 00:15:04.061955 3148 memory_manager.go:354] "RemoveStaleState removing state" podUID="c56b5a78-42f9-4058-9b4b-3aab2a24d615" containerName="cilium-agent" May 14 00:15:04.096337 systemd[1]: Created slice kubepods-burstable-pod59587b8b_5626_4a8c_ae98_9502a4a18713.slice - libcontainer container kubepods-burstable-pod59587b8b_5626_4a8c_ae98_9502a4a18713.slice. May 14 00:15:04.147428 kubelet[3148]: I0514 00:15:04.146907 3148 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/59587b8b-5626-4a8c-ae98-9502a4a18713-xtables-lock\") pod \"cilium-75z97\" (UID: \"59587b8b-5626-4a8c-ae98-9502a4a18713\") " pod="kube-system/cilium-75z97" May 14 00:15:04.147428 kubelet[3148]: I0514 00:15:04.146960 3148 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/59587b8b-5626-4a8c-ae98-9502a4a18713-cilium-config-path\") pod \"cilium-75z97\" (UID: \"59587b8b-5626-4a8c-ae98-9502a4a18713\") " pod="kube-system/cilium-75z97" May 14 00:15:04.147428 kubelet[3148]: I0514 00:15:04.146988 3148 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/59587b8b-5626-4a8c-ae98-9502a4a18713-hubble-tls\") pod \"cilium-75z97\" (UID: \"59587b8b-5626-4a8c-ae98-9502a4a18713\") " pod="kube-system/cilium-75z97" May 14 00:15:04.147428 kubelet[3148]: I0514 00:15:04.147013 3148 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/59587b8b-5626-4a8c-ae98-9502a4a18713-cilium-run\") pod \"cilium-75z97\" (UID: \"59587b8b-5626-4a8c-ae98-9502a4a18713\") " pod="kube-system/cilium-75z97" May 14 00:15:04.147428 kubelet[3148]: I0514 00:15:04.147088 3148 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/59587b8b-5626-4a8c-ae98-9502a4a18713-bpf-maps\") pod \"cilium-75z97\" (UID: \"59587b8b-5626-4a8c-ae98-9502a4a18713\") " pod="kube-system/cilium-75z97" May 14 00:15:04.147428 kubelet[3148]: I0514 
00:15:04.147122 3148 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/59587b8b-5626-4a8c-ae98-9502a4a18713-clustermesh-secrets\") pod \"cilium-75z97\" (UID: \"59587b8b-5626-4a8c-ae98-9502a4a18713\") " pod="kube-system/cilium-75z97" May 14 00:15:04.148120 kubelet[3148]: I0514 00:15:04.147164 3148 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/59587b8b-5626-4a8c-ae98-9502a4a18713-etc-cni-netd\") pod \"cilium-75z97\" (UID: \"59587b8b-5626-4a8c-ae98-9502a4a18713\") " pod="kube-system/cilium-75z97" May 14 00:15:04.148120 kubelet[3148]: I0514 00:15:04.147198 3148 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/59587b8b-5626-4a8c-ae98-9502a4a18713-cilium-cgroup\") pod \"cilium-75z97\" (UID: \"59587b8b-5626-4a8c-ae98-9502a4a18713\") " pod="kube-system/cilium-75z97" May 14 00:15:04.148120 kubelet[3148]: I0514 00:15:04.147224 3148 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/59587b8b-5626-4a8c-ae98-9502a4a18713-cni-path\") pod \"cilium-75z97\" (UID: \"59587b8b-5626-4a8c-ae98-9502a4a18713\") " pod="kube-system/cilium-75z97" May 14 00:15:04.148120 kubelet[3148]: I0514 00:15:04.147250 3148 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/59587b8b-5626-4a8c-ae98-9502a4a18713-cilium-ipsec-secrets\") pod \"cilium-75z97\" (UID: \"59587b8b-5626-4a8c-ae98-9502a4a18713\") " pod="kube-system/cilium-75z97" May 14 00:15:04.148120 kubelet[3148]: I0514 00:15:04.147274 3148 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pm5t5\" (UniqueName: \"kubernetes.io/projected/59587b8b-5626-4a8c-ae98-9502a4a18713-kube-api-access-pm5t5\") pod \"cilium-75z97\" (UID: \"59587b8b-5626-4a8c-ae98-9502a4a18713\") " pod="kube-system/cilium-75z97" May 14 00:15:04.148120 kubelet[3148]: I0514 00:15:04.147304 3148 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/59587b8b-5626-4a8c-ae98-9502a4a18713-hostproc\") pod \"cilium-75z97\" (UID: \"59587b8b-5626-4a8c-ae98-9502a4a18713\") " pod="kube-system/cilium-75z97" May 14 00:15:04.148316 kubelet[3148]: I0514 00:15:04.147328 3148 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/59587b8b-5626-4a8c-ae98-9502a4a18713-lib-modules\") pod \"cilium-75z97\" (UID: \"59587b8b-5626-4a8c-ae98-9502a4a18713\") " pod="kube-system/cilium-75z97" May 14 00:15:04.148316 kubelet[3148]: I0514 00:15:04.147348 3148 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/59587b8b-5626-4a8c-ae98-9502a4a18713-host-proc-sys-kernel\") pod \"cilium-75z97\" (UID: \"59587b8b-5626-4a8c-ae98-9502a4a18713\") " pod="kube-system/cilium-75z97" May 14 00:15:04.148316 kubelet[3148]: I0514 00:15:04.147381 3148 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/59587b8b-5626-4a8c-ae98-9502a4a18713-host-proc-sys-net\") pod \"cilium-75z97\" (UID: \"59587b8b-5626-4a8c-ae98-9502a4a18713\") " pod="kube-system/cilium-75z97" May 14 00:15:04.210338 sshd[4824]: Connection closed by 139.178.89.65 port 57406 May 14 00:15:04.211214 sshd-session[4822]: pam_unix(sshd:session): session closed for user core May 14 00:15:04.215420 systemd[1]: sshd@22-95.217.191.100:22-139.178.89.65:57406.service: Deactivated successfully. May 14 00:15:04.217949 systemd[1]: session-21.scope: Deactivated successfully. May 14 00:15:04.220827 systemd-logind[1497]: Session 21 logged out. Waiting for processes to exit. May 14 00:15:04.223942 systemd-logind[1497]: Removed session 21. May 14 00:15:04.380541 systemd[1]: Started sshd@23-95.217.191.100:22-139.178.89.65:57422.service - OpenSSH per-connection server daemon (139.178.89.65:57422). May 14 00:15:04.410386 containerd[1513]: time="2025-05-14T00:15:04.409976456Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-75z97,Uid:59587b8b-5626-4a8c-ae98-9502a4a18713,Namespace:kube-system,Attempt:0,}" May 14 00:15:04.437124 containerd[1513]: time="2025-05-14T00:15:04.437066218Z" level=info msg="connecting to shim 5f3f5587d38ce74fb4787249550a2660e959d667099bd2a183b50c7ef3cf779a" address="unix:///run/containerd/s/f52ea419582cd182f07bb435e1e9de9277ef2efd411540af7887a643eeea8e9b" namespace=k8s.io protocol=ttrpc version=3 May 14 00:15:04.463433 systemd[1]: Started cri-containerd-5f3f5587d38ce74fb4787249550a2660e959d667099bd2a183b50c7ef3cf779a.scope - libcontainer container 5f3f5587d38ce74fb4787249550a2660e959d667099bd2a183b50c7ef3cf779a. May 14 00:15:04.502139 containerd[1513]: time="2025-05-14T00:15:04.501804325Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-75z97,Uid:59587b8b-5626-4a8c-ae98-9502a4a18713,Namespace:kube-system,Attempt:0,} returns sandbox id \"5f3f5587d38ce74fb4787249550a2660e959d667099bd2a183b50c7ef3cf779a\"" May 14 00:15:04.506238 containerd[1513]: time="2025-05-14T00:15:04.506178820Z" level=info msg="CreateContainer within sandbox \"5f3f5587d38ce74fb4787249550a2660e959d667099bd2a183b50c7ef3cf779a\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 14 00:15:04.513073 containerd[1513]: time="2025-05-14T00:15:04.512964951Z" level=info msg="Container 8facdfb857c459846831852eeb52655925f737b759a76a1efc24325fc90786be: CDI devices from CRI Config.CDIDevices: []" May 14 00:15:04.519461 containerd[1513]: time="2025-05-14T00:15:04.519391100Z" level=info msg="CreateContainer within sandbox \"5f3f5587d38ce74fb4787249550a2660e959d667099bd2a183b50c7ef3cf779a\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"8facdfb857c459846831852eeb52655925f737b759a76a1efc24325fc90786be\"" May 14 00:15:04.522393 containerd[1513]: time="2025-05-14T00:15:04.521360610Z" level=info msg="StartContainer for \"8facdfb857c459846831852eeb52655925f737b759a76a1efc24325fc90786be\"" May 14 00:15:04.522393 containerd[1513]: time="2025-05-14T00:15:04.522130179Z" level=info msg="connecting to shim 8facdfb857c459846831852eeb52655925f737b759a76a1efc24325fc90786be" address="unix:///run/containerd/s/f52ea419582cd182f07bb435e1e9de9277ef2efd411540af7887a643eeea8e9b" protocol=ttrpc version=3 May 14 00:15:04.550364 systemd[1]: Started cri-containerd-8facdfb857c459846831852eeb52655925f737b759a76a1efc24325fc90786be.scope - libcontainer container 8facdfb857c459846831852eeb52655925f737b759a76a1efc24325fc90786be. 
May 14 00:15:04.594584 containerd[1513]: time="2025-05-14T00:15:04.594505296Z" level=info msg="StartContainer for \"8facdfb857c459846831852eeb52655925f737b759a76a1efc24325fc90786be\" returns successfully"
May 14 00:15:04.608290 systemd[1]: cri-containerd-8facdfb857c459846831852eeb52655925f737b759a76a1efc24325fc90786be.scope: Deactivated successfully.
May 14 00:15:04.611283 containerd[1513]: time="2025-05-14T00:15:04.611194263Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8facdfb857c459846831852eeb52655925f737b759a76a1efc24325fc90786be\" id:\"8facdfb857c459846831852eeb52655925f737b759a76a1efc24325fc90786be\" pid:4900 exited_at:{seconds:1747181704 nanos:609663964}"
May 14 00:15:04.611283 containerd[1513]: time="2025-05-14T00:15:04.611268733Z" level=info msg="received exit event container_id:\"8facdfb857c459846831852eeb52655925f737b759a76a1efc24325fc90786be\" id:\"8facdfb857c459846831852eeb52655925f737b759a76a1efc24325fc90786be\" pid:4900 exited_at:{seconds:1747181704 nanos:609663964}"
May 14 00:15:04.910107 containerd[1513]: time="2025-05-14T00:15:04.909982663Z" level=info msg="CreateContainer within sandbox \"5f3f5587d38ce74fb4787249550a2660e959d667099bd2a183b50c7ef3cf779a\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
May 14 00:15:04.922799 containerd[1513]: time="2025-05-14T00:15:04.921860539Z" level=info msg="Container 273d46a05be008a79f87b7138ca6c2e436c32fd9e7694e4b3621f3fad4e73b1a: CDI devices from CRI Config.CDIDevices: []"
May 14 00:15:04.932771 containerd[1513]: time="2025-05-14T00:15:04.932679687Z" level=info msg="CreateContainer within sandbox \"5f3f5587d38ce74fb4787249550a2660e959d667099bd2a183b50c7ef3cf779a\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"273d46a05be008a79f87b7138ca6c2e436c32fd9e7694e4b3621f3fad4e73b1a\""
May 14 00:15:04.934072 containerd[1513]: time="2025-05-14T00:15:04.933988964Z" level=info msg="StartContainer for \"273d46a05be008a79f87b7138ca6c2e436c32fd9e7694e4b3621f3fad4e73b1a\""
May 14 00:15:04.935439 containerd[1513]: time="2025-05-14T00:15:04.935370054Z" level=info msg="connecting to shim 273d46a05be008a79f87b7138ca6c2e436c32fd9e7694e4b3621f3fad4e73b1a" address="unix:///run/containerd/s/f52ea419582cd182f07bb435e1e9de9277ef2efd411540af7887a643eeea8e9b" protocol=ttrpc version=3
May 14 00:15:04.973463 systemd[1]: Started cri-containerd-273d46a05be008a79f87b7138ca6c2e436c32fd9e7694e4b3621f3fad4e73b1a.scope - libcontainer container 273d46a05be008a79f87b7138ca6c2e436c32fd9e7694e4b3621f3fad4e73b1a.
May 14 00:15:05.021467 containerd[1513]: time="2025-05-14T00:15:05.021354370Z" level=info msg="StartContainer for \"273d46a05be008a79f87b7138ca6c2e436c32fd9e7694e4b3621f3fad4e73b1a\" returns successfully"
May 14 00:15:05.031929 systemd[1]: cri-containerd-273d46a05be008a79f87b7138ca6c2e436c32fd9e7694e4b3621f3fad4e73b1a.scope: Deactivated successfully.
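The TaskExit events above record the container exit time as a protobuf timestamp: whole seconds since the Unix epoch plus a nanosecond component. A quick way to cross-check such a value against the wall-clock journal timestamps, shown here as a small Python sketch using the values from the mount-cgroup exit event above:

```python
from datetime import datetime, timezone

# exited_at values copied from the TaskExit event for the mount-cgroup container
seconds, nanos = 1747181704, 609663964

# datetime only carries microseconds, so the nanosecond part is truncated.
exited_at = datetime.fromtimestamp(seconds, tz=timezone.utc).replace(microsecond=nanos // 1000)
print(exited_at.isoformat())  # 2025-05-14T00:15:04.609663+00:00
```

The result lines up with the surrounding journal entries stamped around 00:15:04.61.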
May 14 00:15:05.034612 containerd[1513]: time="2025-05-14T00:15:05.034536607Z" level=info msg="TaskExit event in podsandbox handler container_id:\"273d46a05be008a79f87b7138ca6c2e436c32fd9e7694e4b3621f3fad4e73b1a\" id:\"273d46a05be008a79f87b7138ca6c2e436c32fd9e7694e4b3621f3fad4e73b1a\" pid:4943 exited_at:{seconds:1747181705 nanos:33985777}"
May 14 00:15:05.034797 containerd[1513]: time="2025-05-14T00:15:05.034755344Z" level=info msg="received exit event container_id:\"273d46a05be008a79f87b7138ca6c2e436c32fd9e7694e4b3621f3fad4e73b1a\" id:\"273d46a05be008a79f87b7138ca6c2e436c32fd9e7694e4b3621f3fad4e73b1a\" pid:4943 exited_at:{seconds:1747181705 nanos:33985777}"
May 14 00:15:05.374806 sshd[4839]: Accepted publickey for core from 139.178.89.65 port 57422 ssh2: RSA SHA256:hl+BNyx6+jF0K6RugsdzigOvrBYkZECkTt1LqyyIOlQ
May 14 00:15:05.377313 sshd-session[4839]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 00:15:05.386511 systemd-logind[1497]: New session 22 of user core.
May 14 00:15:05.394363 systemd[1]: Started session-22.scope - Session 22 of User core.
May 14 00:15:05.925059 containerd[1513]: time="2025-05-14T00:15:05.924167876Z" level=info msg="CreateContainer within sandbox \"5f3f5587d38ce74fb4787249550a2660e959d667099bd2a183b50c7ef3cf779a\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
May 14 00:15:05.946638 containerd[1513]: time="2025-05-14T00:15:05.944570608Z" level=info msg="Container 0e03c8e7dfdf0e26d565b08af5b9554dd01d79ef827e07a756959e3974f638bd: CDI devices from CRI Config.CDIDevices: []"
May 14 00:15:05.963902 kubelet[3148]: I0514 00:15:05.963845 3148 setters.go:580] "Node became not ready" node="ci-4284-0-0-n-fdde459219" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-14T00:15:05Z","lastTransitionTime":"2025-05-14T00:15:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
May 14 00:15:05.970436 containerd[1513]: time="2025-05-14T00:15:05.969971520Z" level=info msg="CreateContainer within sandbox \"5f3f5587d38ce74fb4787249550a2660e959d667099bd2a183b50c7ef3cf779a\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"0e03c8e7dfdf0e26d565b08af5b9554dd01d79ef827e07a756959e3974f638bd\""
May 14 00:15:05.972461 containerd[1513]: time="2025-05-14T00:15:05.970713057Z" level=info msg="StartContainer for \"0e03c8e7dfdf0e26d565b08af5b9554dd01d79ef827e07a756959e3974f638bd\""
May 14 00:15:05.974167 containerd[1513]: time="2025-05-14T00:15:05.974135864Z" level=info msg="connecting to shim 0e03c8e7dfdf0e26d565b08af5b9554dd01d79ef827e07a756959e3974f638bd" address="unix:///run/containerd/s/f52ea419582cd182f07bb435e1e9de9277ef2efd411540af7887a643eeea8e9b" protocol=ttrpc version=3
May 14 00:15:06.006536 systemd[1]: Started cri-containerd-0e03c8e7dfdf0e26d565b08af5b9554dd01d79ef827e07a756959e3974f638bd.scope - libcontainer container 0e03c8e7dfdf0e26d565b08af5b9554dd01d79ef827e07a756959e3974f638bd.
May 14 00:15:06.044814 containerd[1513]: time="2025-05-14T00:15:06.044778793Z" level=info msg="StartContainer for \"0e03c8e7dfdf0e26d565b08af5b9554dd01d79ef827e07a756959e3974f638bd\" returns successfully"
May 14 00:15:06.048891 sshd[4974]: Connection closed by 139.178.89.65 port 57422
May 14 00:15:06.050572 sshd-session[4839]: pam_unix(sshd:session): session closed for user core
May 14 00:15:06.054331 systemd[1]: sshd@23-95.217.191.100:22-139.178.89.65:57422.service: Deactivated successfully.
May 14 00:15:06.056785 containerd[1513]: time="2025-05-14T00:15:06.056753677Z" level=info msg="received exit event container_id:\"0e03c8e7dfdf0e26d565b08af5b9554dd01d79ef827e07a756959e3974f638bd\" id:\"0e03c8e7dfdf0e26d565b08af5b9554dd01d79ef827e07a756959e3974f638bd\" pid:4991 exited_at:{seconds:1747181706 nanos:55946047}"
May 14 00:15:06.057253 containerd[1513]: time="2025-05-14T00:15:06.056989517Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0e03c8e7dfdf0e26d565b08af5b9554dd01d79ef827e07a756959e3974f638bd\" id:\"0e03c8e7dfdf0e26d565b08af5b9554dd01d79ef827e07a756959e3974f638bd\" pid:4991 exited_at:{seconds:1747181706 nanos:55946047}"
May 14 00:15:06.058609 systemd[1]: session-22.scope: Deactivated successfully.
May 14 00:15:06.059556 systemd[1]: cri-containerd-0e03c8e7dfdf0e26d565b08af5b9554dd01d79ef827e07a756959e3974f638bd.scope: Deactivated successfully.
May 14 00:15:06.063961 systemd-logind[1497]: Session 22 logged out. Waiting for processes to exit.
May 14 00:15:06.065553 systemd-logind[1497]: Removed session 22.
May 14 00:15:06.082429 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0e03c8e7dfdf0e26d565b08af5b9554dd01d79ef827e07a756959e3974f638bd-rootfs.mount: Deactivated successfully.
May 14 00:15:06.223062 systemd[1]: Started sshd@24-95.217.191.100:22-139.178.89.65:57424.service - OpenSSH per-connection server daemon (139.178.89.65:57424).
May 14 00:15:06.920806 containerd[1513]: time="2025-05-14T00:15:06.920726499Z" level=info msg="CreateContainer within sandbox \"5f3f5587d38ce74fb4787249550a2660e959d667099bd2a183b50c7ef3cf779a\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 14 00:15:06.935859 containerd[1513]: time="2025-05-14T00:15:06.935061083Z" level=info msg="Container 37bad8ec77785e7f165da12cfdc435282c4e70b8f20d9bdc836b052afae29302: CDI devices from CRI Config.CDIDevices: []"
May 14 00:15:06.947897 containerd[1513]: time="2025-05-14T00:15:06.947850209Z" level=info msg="CreateContainer within sandbox \"5f3f5587d38ce74fb4787249550a2660e959d667099bd2a183b50c7ef3cf779a\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"37bad8ec77785e7f165da12cfdc435282c4e70b8f20d9bdc836b052afae29302\""
May 14 00:15:06.948905 containerd[1513]: time="2025-05-14T00:15:06.948868722Z" level=info msg="StartContainer for \"37bad8ec77785e7f165da12cfdc435282c4e70b8f20d9bdc836b052afae29302\""
May 14 00:15:06.950121 containerd[1513]: time="2025-05-14T00:15:06.950079515Z" level=info msg="connecting to shim 37bad8ec77785e7f165da12cfdc435282c4e70b8f20d9bdc836b052afae29302" address="unix:///run/containerd/s/f52ea419582cd182f07bb435e1e9de9277ef2efd411540af7887a643eeea8e9b" protocol=ttrpc version=3
May 14 00:15:06.978329 systemd[1]: Started cri-containerd-37bad8ec77785e7f165da12cfdc435282c4e70b8f20d9bdc836b052afae29302.scope - libcontainer container 37bad8ec77785e7f165da12cfdc435282c4e70b8f20d9bdc836b052afae29302.
May 14 00:15:07.016549 systemd[1]: cri-containerd-37bad8ec77785e7f165da12cfdc435282c4e70b8f20d9bdc836b052afae29302.scope: Deactivated successfully.
May 14 00:15:07.019855 containerd[1513]: time="2025-05-14T00:15:07.019801874Z" level=info msg="TaskExit event in podsandbox handler container_id:\"37bad8ec77785e7f165da12cfdc435282c4e70b8f20d9bdc836b052afae29302\" id:\"37bad8ec77785e7f165da12cfdc435282c4e70b8f20d9bdc836b052afae29302\" pid:5036 exited_at:{seconds:1747181707 nanos:19271082}"
May 14 00:15:07.020649 containerd[1513]: time="2025-05-14T00:15:07.020516590Z" level=info msg="received exit event container_id:\"37bad8ec77785e7f165da12cfdc435282c4e70b8f20d9bdc836b052afae29302\" id:\"37bad8ec77785e7f165da12cfdc435282c4e70b8f20d9bdc836b052afae29302\" pid:5036 exited_at:{seconds:1747181707 nanos:19271082}"
May 14 00:15:07.031595 containerd[1513]: time="2025-05-14T00:15:07.031543525Z" level=info msg="StartContainer for \"37bad8ec77785e7f165da12cfdc435282c4e70b8f20d9bdc836b052afae29302\" returns successfully"
May 14 00:15:07.050223 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-37bad8ec77785e7f165da12cfdc435282c4e70b8f20d9bdc836b052afae29302-rootfs.mount: Deactivated successfully.
May 14 00:15:07.222110 sshd[5022]: Accepted publickey for core from 139.178.89.65 port 57424 ssh2: RSA SHA256:hl+BNyx6+jF0K6RugsdzigOvrBYkZECkTt1LqyyIOlQ
May 14 00:15:07.223221 sshd-session[5022]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 00:15:07.233111 systemd-logind[1497]: New session 23 of user core.
May 14 00:15:07.243329 systemd[1]: Started session-23.scope - Session 23 of User core.
May 14 00:15:07.927064 containerd[1513]: time="2025-05-14T00:15:07.926955556Z" level=info msg="CreateContainer within sandbox \"5f3f5587d38ce74fb4787249550a2660e959d667099bd2a183b50c7ef3cf779a\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 14 00:15:07.955132 containerd[1513]: time="2025-05-14T00:15:07.953046041Z" level=info msg="Container 38a059bd3ed0be4a1e55a67497bcd305d3c12c54062dbe528363342240de12c9: CDI devices from CRI Config.CDIDevices: []"
May 14 00:15:07.959725 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1373827055.mount: Deactivated successfully.
May 14 00:15:07.972953 containerd[1513]: time="2025-05-14T00:15:07.972878787Z" level=info msg="CreateContainer within sandbox \"5f3f5587d38ce74fb4787249550a2660e959d667099bd2a183b50c7ef3cf779a\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"38a059bd3ed0be4a1e55a67497bcd305d3c12c54062dbe528363342240de12c9\""
May 14 00:15:07.975657 containerd[1513]: time="2025-05-14T00:15:07.974171694Z" level=info msg="StartContainer for \"38a059bd3ed0be4a1e55a67497bcd305d3c12c54062dbe528363342240de12c9\""
May 14 00:15:07.975657 containerd[1513]: time="2025-05-14T00:15:07.975224382Z" level=info msg="connecting to shim 38a059bd3ed0be4a1e55a67497bcd305d3c12c54062dbe528363342240de12c9" address="unix:///run/containerd/s/f52ea419582cd182f07bb435e1e9de9277ef2efd411540af7887a643eeea8e9b" protocol=ttrpc version=3
May 14 00:15:08.011339 systemd[1]: Started cri-containerd-38a059bd3ed0be4a1e55a67497bcd305d3c12c54062dbe528363342240de12c9.scope - libcontainer container 38a059bd3ed0be4a1e55a67497bcd305d3c12c54062dbe528363342240de12c9.
May 14 00:15:08.064982 containerd[1513]: time="2025-05-14T00:15:08.064855396Z" level=info msg="StartContainer for \"38a059bd3ed0be4a1e55a67497bcd305d3c12c54062dbe528363342240de12c9\" returns successfully"
May 14 00:15:08.126147 kubelet[3148]: E0514 00:15:08.126093 3148 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 14 00:15:08.162402 containerd[1513]: time="2025-05-14T00:15:08.162352593Z" level=info msg="TaskExit event in podsandbox handler container_id:\"38a059bd3ed0be4a1e55a67497bcd305d3c12c54062dbe528363342240de12c9\" id:\"c534686cb92d4ef25aaaff47ab7fcafc510e5f734cc28a2070c8da695fa7a692\" pid:5111 exited_at:{seconds:1747181708 nanos:161753232}"
May 14 00:15:08.592076 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
May 14 00:15:08.959690 kubelet[3148]: I0514 00:15:08.958756 3148 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-75z97" podStartSLOduration=4.958734983 podStartE2EDuration="4.958734983s" podCreationTimestamp="2025-05-14 00:15:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 00:15:08.957003886 +0000 UTC m=+341.137317057" watchObservedRunningTime="2025-05-14 00:15:08.958734983 +0000 UTC m=+341.139048154"
May 14 00:15:10.287881 containerd[1513]: time="2025-05-14T00:15:10.287837501Z" level=info msg="TaskExit event in podsandbox handler container_id:\"38a059bd3ed0be4a1e55a67497bcd305d3c12c54062dbe528363342240de12c9\" id:\"ad71a89bc838d81e4da3d5cc1c554efb13da9d269295ead807470e5d73a5abd6\" pid:5291 exit_status:1 exited_at:{seconds:1747181710 nanos:287470664}"
May 14 00:15:10.305354 kubelet[3148]: E0514 00:15:10.305297 3148 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:49394->127.0.0.1:37419: write tcp 127.0.0.1:49394->127.0.0.1:37419: write: broken pipe
May 14 00:15:11.751084 systemd-networkd[1405]: lxc_health: Link UP
May 14 00:15:11.751298 systemd-networkd[1405]: lxc_health: Gained carrier
May 14 00:15:12.546378 containerd[1513]: time="2025-05-14T00:15:12.546315376Z" level=info msg="TaskExit event in podsandbox handler container_id:\"38a059bd3ed0be4a1e55a67497bcd305d3c12c54062dbe528363342240de12c9\" id:\"85607be4bb2a43fcbe7f0c06d5836971658d4d42ac3c00e3749ed3172aff7af9\" pid:5675 exited_at:{seconds:1747181712 nanos:541813013}"
May 14 00:15:13.160471 systemd-networkd[1405]: lxc_health: Gained IPv6LL
May 14 00:15:14.733915 containerd[1513]: time="2025-05-14T00:15:14.733869496Z" level=info msg="TaskExit event in podsandbox handler container_id:\"38a059bd3ed0be4a1e55a67497bcd305d3c12c54062dbe528363342240de12c9\" id:\"486db952c782c5a9ebc5d90bffadde82aea17238eea04e8260485c9d501016ce\" pid:5713 exited_at:{seconds:1747181714 nanos:733352429}"
May 14 00:15:16.894092 containerd[1513]: time="2025-05-14T00:15:16.894042457Z" level=info msg="TaskExit event in podsandbox handler container_id:\"38a059bd3ed0be4a1e55a67497bcd305d3c12c54062dbe528363342240de12c9\" id:\"b8ec583502a7bcaa110dac7d80b37179f1b2241f3355328e8a74a76a0d556063\" pid:5742 exited_at:{seconds:1747181716 nanos:893529827}"
May 14 00:15:17.057308 sshd[5062]: Connection closed by 139.178.89.65 port 57424
May 14 00:15:17.058267 sshd-session[5022]: pam_unix(sshd:session): session closed for user core
May 14 00:15:17.061384 systemd[1]: sshd@24-95.217.191.100:22-139.178.89.65:57424.service: Deactivated successfully.
May 14 00:15:17.062851 systemd[1]: session-23.scope: Deactivated successfully.
May 14 00:15:17.063606 systemd-logind[1497]: Session 23 logged out. Waiting for processes to exit.
May 14 00:15:17.065178 systemd-logind[1497]: Removed session 23.
May 14 00:15:27.973565 containerd[1513]: time="2025-05-14T00:15:27.973433993Z" level=info msg="StopPodSandbox for \"69ffd32419e348c43191c8a88777b742ccb7c30a2f195bfdcece5dfedf87fd0c\""
May 14 00:15:27.976150 containerd[1513]: time="2025-05-14T00:15:27.973654185Z" level=info msg="TearDown network for sandbox \"69ffd32419e348c43191c8a88777b742ccb7c30a2f195bfdcece5dfedf87fd0c\" successfully"
May 14 00:15:27.976150 containerd[1513]: time="2025-05-14T00:15:27.973673361Z" level=info msg="StopPodSandbox for \"69ffd32419e348c43191c8a88777b742ccb7c30a2f195bfdcece5dfedf87fd0c\" returns successfully"
May 14 00:15:27.976150 containerd[1513]: time="2025-05-14T00:15:27.974290958Z" level=info msg="RemovePodSandbox for \"69ffd32419e348c43191c8a88777b742ccb7c30a2f195bfdcece5dfedf87fd0c\""
May 14 00:15:27.976150 containerd[1513]: time="2025-05-14T00:15:27.974335511Z" level=info msg="Forcibly stopping sandbox \"69ffd32419e348c43191c8a88777b742ccb7c30a2f195bfdcece5dfedf87fd0c\""
May 14 00:15:27.976150 containerd[1513]: time="2025-05-14T00:15:27.974488950Z" level=info msg="TearDown network for sandbox \"69ffd32419e348c43191c8a88777b742ccb7c30a2f195bfdcece5dfedf87fd0c\" successfully"
May 14 00:15:27.984964 containerd[1513]: time="2025-05-14T00:15:27.984894527Z" level=info msg="Ensure that sandbox 69ffd32419e348c43191c8a88777b742ccb7c30a2f195bfdcece5dfedf87fd0c in task-service has been cleanup successfully"
May 14 00:15:27.993553 containerd[1513]: time="2025-05-14T00:15:27.993427707Z" level=info msg="RemovePodSandbox \"69ffd32419e348c43191c8a88777b742ccb7c30a2f195bfdcece5dfedf87fd0c\" returns successfully"
May 14 00:15:28.000200 containerd[1513]: time="2025-05-14T00:15:28.000130647Z" level=info msg="StopPodSandbox for \"e46a5d74fe2e14cb405179a60714ad36745ee06cd4f4cdcc00598a2259ffe7e8\""
May 14 00:15:28.000466 containerd[1513]: time="2025-05-14T00:15:28.000370507Z" level=info msg="TearDown network for sandbox \"e46a5d74fe2e14cb405179a60714ad36745ee06cd4f4cdcc00598a2259ffe7e8\" successfully"
May 14 00:15:28.000466 containerd[1513]: time="2025-05-14T00:15:28.000403559Z" level=info msg="StopPodSandbox for \"e46a5d74fe2e14cb405179a60714ad36745ee06cd4f4cdcc00598a2259ffe7e8\" returns successfully"
May 14 00:15:28.001499 containerd[1513]: time="2025-05-14T00:15:28.001444560Z" level=info msg="RemovePodSandbox for \"e46a5d74fe2e14cb405179a60714ad36745ee06cd4f4cdcc00598a2259ffe7e8\""
May 14 00:15:28.001580 containerd[1513]: time="2025-05-14T00:15:28.001508850Z" level=info msg="Forcibly stopping sandbox \"e46a5d74fe2e14cb405179a60714ad36745ee06cd4f4cdcc00598a2259ffe7e8\""
May 14 00:15:28.001670 containerd[1513]: time="2025-05-14T00:15:28.001645345Z" level=info msg="TearDown network for sandbox \"e46a5d74fe2e14cb405179a60714ad36745ee06cd4f4cdcc00598a2259ffe7e8\" successfully"
May 14 00:15:28.003773 containerd[1513]: time="2025-05-14T00:15:28.003738748Z" level=info msg="Ensure that sandbox e46a5d74fe2e14cb405179a60714ad36745ee06cd4f4cdcc00598a2259ffe7e8 in task-service has been cleanup successfully"
May 14 00:15:28.012272 containerd[1513]: time="2025-05-14T00:15:28.012207710Z" level=info msg="RemovePodSandbox \"e46a5d74fe2e14cb405179a60714ad36745ee06cd4f4cdcc00598a2259ffe7e8\" returns successfully"
May 14 00:15:37.201357 systemd[1]: cri-containerd-6c0a3053192afb1985fbf29b6abb0e2d848f7309453b8106c215564dac6816a7.scope: Deactivated successfully.
May 14 00:15:37.201806 systemd[1]: cri-containerd-6c0a3053192afb1985fbf29b6abb0e2d848f7309453b8106c215564dac6816a7.scope: Consumed 7.065s CPU time, 73.8M memory peak, 19.8M read from disk.
May 14 00:15:37.205245 containerd[1513]: time="2025-05-14T00:15:37.205138116Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6c0a3053192afb1985fbf29b6abb0e2d848f7309453b8106c215564dac6816a7\" id:\"6c0a3053192afb1985fbf29b6abb0e2d848f7309453b8106c215564dac6816a7\" pid:2980 exit_status:1 exited_at:{seconds:1747181737 nanos:204476836}"
May 14 00:15:37.205986 containerd[1513]: time="2025-05-14T00:15:37.205247812Z" level=info msg="received exit event container_id:\"6c0a3053192afb1985fbf29b6abb0e2d848f7309453b8106c215564dac6816a7\" id:\"6c0a3053192afb1985fbf29b6abb0e2d848f7309453b8106c215564dac6816a7\" pid:2980 exit_status:1 exited_at:{seconds:1747181737 nanos:204476836}"
May 14 00:15:37.249630 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6c0a3053192afb1985fbf29b6abb0e2d848f7309453b8106c215564dac6816a7-rootfs.mount: Deactivated successfully.
May 14 00:15:37.301484 kubelet[3148]: E0514 00:15:37.301405 3148 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:37382->10.0.0.2:2379: read: connection timed out"
May 14 00:15:38.013590 kubelet[3148]: I0514 00:15:38.013547 3148 scope.go:117] "RemoveContainer" containerID="6c0a3053192afb1985fbf29b6abb0e2d848f7309453b8106c215564dac6816a7"
May 14 00:15:38.020714 containerd[1513]: time="2025-05-14T00:15:38.020670706Z" level=info msg="CreateContainer within sandbox \"2229ce920f9ccf1a3c1c58a4eda16de4dcc6318b786757697e238d423e54a289\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
May 14 00:15:38.047410 containerd[1513]: time="2025-05-14T00:15:38.047337691Z" level=info msg="Container 1033802c84cb1280aebcdc245314b77df339902103ec29a814974f5b0c92c93b: CDI devices from CRI Config.CDIDevices: []"
May 14 00:15:38.055968 containerd[1513]: time="2025-05-14T00:15:38.055909391Z" level=info msg="CreateContainer within sandbox \"2229ce920f9ccf1a3c1c58a4eda16de4dcc6318b786757697e238d423e54a289\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"1033802c84cb1280aebcdc245314b77df339902103ec29a814974f5b0c92c93b\""
May 14 00:15:38.056723 containerd[1513]: time="2025-05-14T00:15:38.056692110Z" level=info msg="StartContainer for \"1033802c84cb1280aebcdc245314b77df339902103ec29a814974f5b0c92c93b\""
May 14 00:15:38.058399 containerd[1513]: time="2025-05-14T00:15:38.058347135Z" level=info msg="connecting to shim 1033802c84cb1280aebcdc245314b77df339902103ec29a814974f5b0c92c93b" address="unix:///run/containerd/s/37e02c690ecdbea2650114137114b9555805c85f740364494a3e43cc392ba331" protocol=ttrpc version=3
May 14 00:15:38.090480 systemd[1]: Started cri-containerd-1033802c84cb1280aebcdc245314b77df339902103ec29a814974f5b0c92c93b.scope - libcontainer container 1033802c84cb1280aebcdc245314b77df339902103ec29a814974f5b0c92c93b.
May 14 00:15:38.187280 containerd[1513]: time="2025-05-14T00:15:38.186349390Z" level=info msg="StartContainer for \"1033802c84cb1280aebcdc245314b77df339902103ec29a814974f5b0c92c93b\" returns successfully"
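The pod_startup_latency_tracker entry earlier in this section reports podStartE2EDuration="4.958734983s" for cilium-75z97. That figure can be sanity-checked as plain timestamp arithmetic, shown in the Python sketch below on the assumption that it is simply watchObservedRunningTime minus podCreationTimestamp, both copied from that entry (strptime only accepts microsecond precision, so the nanosecond part is truncated):

```python
from datetime import datetime

FMT = "%Y-%m-%d %H:%M:%S.%f %z"

# Timestamps copied from the "Observed pod startup duration" entry for cilium-75z97;
# 00:15:08.958734983 is truncated to .958734 for strptime.
created  = datetime.strptime("2025-05-14 00:15:04.000000 +0000", FMT)
observed = datetime.strptime("2025-05-14 00:15:08.958734 +0000", FMT)

print((observed - created).total_seconds())  # 4.958734, matching the reported ~4.958734983s
```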