May 9 00:10:55.862557 kernel: Linux version 6.6.89-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu May 8 22:21:52 -00 2025
May 9 00:10:55.862576 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=8e6c4805303143bfaf51e786bee05d9a5466809f675df313b1f69aaa84c2d4ce
May 9 00:10:55.862583 kernel: BIOS-provided physical RAM map:
May 9 00:10:55.862588 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
May 9 00:10:55.862593 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
May 9 00:10:55.862598 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
May 9 00:10:55.862603 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007cfdbfff] usable
May 9 00:10:55.862608 kernel: BIOS-e820: [mem 0x000000007cfdc000-0x000000007cffffff] reserved
May 9 00:10:55.862614 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
May 9 00:10:55.862618 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
May 9 00:10:55.862635 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
May 9 00:10:55.862640 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
May 9 00:10:55.862644 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
May 9 00:10:55.862649 kernel: NX (Execute Disable) protection: active
May 9 00:10:55.862657 kernel: APIC: Static calls initialized
May 9 00:10:55.862662 kernel: SMBIOS 3.0.0 present.
May 9 00:10:55.862667 kernel: DMI: Hetzner vServer/Standard PC (Q35 + ICH9, 2009), BIOS 20171111 11/11/2017
May 9 00:10:55.862672 kernel: Hypervisor detected: KVM
May 9 00:10:55.862677 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
May 9 00:10:55.862682 kernel: kvm-clock: using sched offset of 3178218273 cycles
May 9 00:10:55.862687 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
May 9 00:10:55.862692 kernel: tsc: Detected 2445.404 MHz processor
May 9 00:10:55.862697 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
May 9 00:10:55.862704 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
May 9 00:10:55.862709 kernel: last_pfn = 0x7cfdc max_arch_pfn = 0x400000000
May 9 00:10:55.862715 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
May 9 00:10:55.862720 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
May 9 00:10:55.862725 kernel: Using GB pages for direct mapping
May 9 00:10:55.862730 kernel: ACPI: Early table checksum verification disabled
May 9 00:10:55.862735 kernel: ACPI: RSDP 0x00000000000F5270 000014 (v00 BOCHS )
May 9 00:10:55.862740 kernel: ACPI: RSDT 0x000000007CFE2693 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 9 00:10:55.862745 kernel: ACPI: FACP 0x000000007CFE2483 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 9 00:10:55.862751 kernel: ACPI: DSDT 0x000000007CFE0040 002443 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 9 00:10:55.862756 kernel: ACPI: FACS 0x000000007CFE0000 000040
May 9 00:10:55.862761 kernel: ACPI: APIC 0x000000007CFE2577 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 9 00:10:55.862766 kernel: ACPI: HPET 0x000000007CFE25F7 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 9 00:10:55.862771 kernel: ACPI: MCFG 0x000000007CFE262F 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 9 00:10:55.862776 kernel: ACPI: WAET 0x000000007CFE266B 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 9 00:10:55.862781 kernel: ACPI: Reserving FACP table memory at [mem 0x7cfe2483-0x7cfe2576]
May 9 00:10:55.862786 kernel: ACPI: Reserving DSDT table memory at [mem 0x7cfe0040-0x7cfe2482]
May 9 00:10:55.862795 kernel: ACPI: Reserving FACS table memory at [mem 0x7cfe0000-0x7cfe003f]
May 9 00:10:55.862800 kernel: ACPI: Reserving APIC table memory at [mem 0x7cfe2577-0x7cfe25f6]
May 9 00:10:55.862806 kernel: ACPI: Reserving HPET table memory at [mem 0x7cfe25f7-0x7cfe262e]
May 9 00:10:55.862811 kernel: ACPI: Reserving MCFG table memory at [mem 0x7cfe262f-0x7cfe266a]
May 9 00:10:55.862816 kernel: ACPI: Reserving WAET table memory at [mem 0x7cfe266b-0x7cfe2692]
May 9 00:10:55.862821 kernel: No NUMA configuration found
May 9 00:10:55.862831 kernel: Faking a node at [mem 0x0000000000000000-0x000000007cfdbfff]
May 9 00:10:55.862851 kernel: NODE_DATA(0) allocated [mem 0x7cfd6000-0x7cfdbfff]
May 9 00:10:55.862863 kernel: Zone ranges:
May 9 00:10:55.862874 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
May 9 00:10:55.862881 kernel: DMA32 [mem 0x0000000001000000-0x000000007cfdbfff]
May 9 00:10:55.862890 kernel: Normal empty
May 9 00:10:55.862900 kernel: Movable zone start for each node
May 9 00:10:55.862909 kernel: Early memory node ranges
May 9 00:10:55.862919 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
May 9 00:10:55.862929 kernel: node 0: [mem 0x0000000000100000-0x000000007cfdbfff]
May 9 00:10:55.862943 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007cfdbfff]
May 9 00:10:55.862954 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 9 00:10:55.862964 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
May 9 00:10:55.862974 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
May 9 00:10:55.862984 kernel: ACPI: PM-Timer IO Port: 0x608
May 9 00:10:55.862989 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
May 9 00:10:55.862994 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
May 9 00:10:55.863000 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
May 9 00:10:55.863005 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
May 9 00:10:55.863012 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
May 9 00:10:55.863018 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
May 9 00:10:55.863023 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
May 9 00:10:55.863028 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
May 9 00:10:55.863033 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
May 9 00:10:55.863039 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
May 9 00:10:55.863044 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
May 9 00:10:55.863049 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
May 9 00:10:55.863054 kernel: Booting paravirtualized kernel on KVM
May 9 00:10:55.863061 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
May 9 00:10:55.863067 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
May 9 00:10:55.863072 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
May 9 00:10:55.863077 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
May 9 00:10:55.863083 kernel: pcpu-alloc: [0] 0 1
May 9 00:10:55.863093 kernel: kvm-guest: PV spinlocks disabled, no host support
May 9 00:10:55.863102 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=8e6c4805303143bfaf51e786bee05d9a5466809f675df313b1f69aaa84c2d4ce
May 9 00:10:55.863108 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 9 00:10:55.863115 kernel: random: crng init done
May 9 00:10:55.863121 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 9 00:10:55.863126 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
May 9 00:10:55.863131 kernel: Fallback order for Node 0: 0
May 9 00:10:55.863137 kernel: Built 1 zonelists, mobility grouping on. Total pages: 503708
May 9 00:10:55.863142 kernel: Policy zone: DMA32
May 9 00:10:55.863147 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 9 00:10:55.863153 kernel: Memory: 1922052K/2047464K available (12288K kernel code, 2295K rwdata, 22752K rodata, 43000K init, 2192K bss, 125152K reserved, 0K cma-reserved)
May 9 00:10:55.863158 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
May 9 00:10:55.863163 kernel: ftrace: allocating 37946 entries in 149 pages
May 9 00:10:55.863170 kernel: ftrace: allocated 149 pages with 4 groups
May 9 00:10:55.863175 kernel: Dynamic Preempt: voluntary
May 9 00:10:55.863180 kernel: rcu: Preemptible hierarchical RCU implementation.
May 9 00:10:55.863189 kernel: rcu: RCU event tracing is enabled.
May 9 00:10:55.863198 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
May 9 00:10:55.863206 kernel: Trampoline variant of Tasks RCU enabled.
May 9 00:10:55.863215 kernel: Rude variant of Tasks RCU enabled.
May 9 00:10:55.863220 kernel: Tracing variant of Tasks RCU enabled.
May 9 00:10:55.863226 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 9 00:10:55.863233 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
May 9 00:10:55.863238 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
May 9 00:10:55.863244 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 9 00:10:55.863249 kernel: Console: colour VGA+ 80x25
May 9 00:10:55.863254 kernel: printk: console [tty0] enabled
May 9 00:10:55.863259 kernel: printk: console [ttyS0] enabled
May 9 00:10:55.863265 kernel: ACPI: Core revision 20230628
May 9 00:10:55.863270 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
May 9 00:10:55.863275 kernel: APIC: Switch to symmetric I/O mode setup
May 9 00:10:55.863282 kernel: x2apic enabled
May 9 00:10:55.863287 kernel: APIC: Switched APIC routing to: physical x2apic
May 9 00:10:55.863293 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
May 9 00:10:55.863368 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
May 9 00:10:55.863384 kernel: Calibrating delay loop (skipped) preset value.. 4890.80 BogoMIPS (lpj=2445404)
May 9 00:10:55.863391 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
May 9 00:10:55.863396 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
May 9 00:10:55.863402 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
May 9 00:10:55.863415 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
May 9 00:10:55.863421 kernel: Spectre V2 : Mitigation: Retpolines
May 9 00:10:55.863427 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
May 9 00:10:55.863432 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
May 9 00:10:55.863440 kernel: RETBleed: Mitigation: untrained return thunk
May 9 00:10:55.863445 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
May 9 00:10:55.863451 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
May 9 00:10:55.863457 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
May 9 00:10:55.863463 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
May 9 00:10:55.863470 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
May 9 00:10:55.863475 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
May 9 00:10:55.863481 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
May 9 00:10:55.863487 kernel: Freeing SMP alternatives memory: 32K
May 9 00:10:55.863492 kernel: pid_max: default: 32768 minimum: 301
May 9 00:10:55.863498 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
May 9 00:10:55.863503 kernel: landlock: Up and running.
May 9 00:10:55.863509 kernel: SELinux: Initializing.
May 9 00:10:55.863516 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
May 9 00:10:55.863522 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
May 9 00:10:55.863527 kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
May 9 00:10:55.863533 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 9 00:10:55.863538 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 9 00:10:55.863544 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 9 00:10:55.863550 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
May 9 00:10:55.863555 kernel: ... version: 0
May 9 00:10:55.863561 kernel: ... bit width: 48
May 9 00:10:55.863568 kernel: ... generic registers: 6
May 9 00:10:55.863573 kernel: ... value mask: 0000ffffffffffff
May 9 00:10:55.863579 kernel: ... max period: 00007fffffffffff
May 9 00:10:55.863584 kernel: ... fixed-purpose events: 0
May 9 00:10:55.863590 kernel: ... event mask: 000000000000003f
May 9 00:10:55.863596 kernel: signal: max sigframe size: 1776
May 9 00:10:55.863601 kernel: rcu: Hierarchical SRCU implementation.
May 9 00:10:55.863608 kernel: rcu: Max phase no-delay instances is 400.
May 9 00:10:55.863614 kernel: smp: Bringing up secondary CPUs ...
May 9 00:10:55.863638 kernel: smpboot: x86: Booting SMP configuration:
May 9 00:10:55.863649 kernel: .... node #0, CPUs: #1
May 9 00:10:55.863655 kernel: smp: Brought up 1 node, 2 CPUs
May 9 00:10:55.863661 kernel: smpboot: Max logical packages: 1
May 9 00:10:55.863666 kernel: smpboot: Total of 2 processors activated (9781.61 BogoMIPS)
May 9 00:10:55.863672 kernel: devtmpfs: initialized
May 9 00:10:55.863677 kernel: x86/mm: Memory block size: 128MB
May 9 00:10:55.863683 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 9 00:10:55.863689 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
May 9 00:10:55.863694 kernel: pinctrl core: initialized pinctrl subsystem
May 9 00:10:55.863702 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 9 00:10:55.863708 kernel: audit: initializing netlink subsys (disabled)
May 9 00:10:55.863713 kernel: audit: type=2000 audit(1746749454.575:1): state=initialized audit_enabled=0 res=1
May 9 00:10:55.863719 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 9 00:10:55.863724 kernel: thermal_sys: Registered thermal governor 'user_space'
May 9 00:10:55.863730 kernel: cpuidle: using governor menu
May 9 00:10:55.863735 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 9 00:10:55.863741 kernel: dca service started, version 1.12.1
May 9 00:10:55.863747 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
May 9 00:10:55.863754 kernel: PCI: Using configuration type 1 for base access
May 9 00:10:55.863759 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
May 9 00:10:55.863765 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 9 00:10:55.863771 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
May 9 00:10:55.863776 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 9 00:10:55.863782 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
May 9 00:10:55.863787 kernel: ACPI: Added _OSI(Module Device)
May 9 00:10:55.863793 kernel: ACPI: Added _OSI(Processor Device)
May 9 00:10:55.863800 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 9 00:10:55.863805 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 9 00:10:55.863811 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 9 00:10:55.863816 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
May 9 00:10:55.863823 kernel: ACPI: Interpreter enabled
May 9 00:10:55.863838 kernel: ACPI: PM: (supports S0 S5)
May 9 00:10:55.863852 kernel: ACPI: Using IOAPIC for interrupt routing
May 9 00:10:55.863863 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
May 9 00:10:55.863874 kernel: PCI: Using E820 reservations for host bridge windows
May 9 00:10:55.863885 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
May 9 00:10:55.863894 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 9 00:10:55.864012 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 9 00:10:55.864113 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
May 9 00:10:55.864177 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
May 9 00:10:55.864186 kernel: PCI host bridge to bus 0000:00
May 9 00:10:55.864250 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
May 9 00:10:55.864354 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
May 9 00:10:55.864429 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
May 9 00:10:55.864483 kernel: pci_bus 0000:00: root bus resource [mem 0x7d000000-0xafffffff window]
May 9 00:10:55.864536 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
May 9 00:10:55.864600 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
May 9 00:10:55.864674 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 9 00:10:55.864750 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
May 9 00:10:55.864836 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x030000
May 9 00:10:55.864938 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfb800000-0xfbffffff pref]
May 9 00:10:55.865003 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfd200000-0xfd203fff 64bit pref]
May 9 00:10:55.865065 kernel: pci 0000:00:01.0: reg 0x20: [mem 0xfea10000-0xfea10fff]
May 9 00:10:55.865126 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea00000-0xfea0ffff pref]
May 9 00:10:55.865187 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
May 9 00:10:55.865255 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
May 9 00:10:55.865378 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea11000-0xfea11fff]
May 9 00:10:55.865461 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
May 9 00:10:55.865524 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea12000-0xfea12fff]
May 9 00:10:55.865591 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
May 9 00:10:55.865681 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea13000-0xfea13fff]
May 9 00:10:55.865750 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
May 9 00:10:55.865840 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea14000-0xfea14fff]
May 9 00:10:55.865953 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
May 9 00:10:55.866031 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea15000-0xfea15fff]
May 9 00:10:55.866101 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
May 9 00:10:55.866162 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea16000-0xfea16fff]
May 9 00:10:55.866229 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
May 9 00:10:55.866297 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea17000-0xfea17fff]
May 9 00:10:55.866425 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
May 9 00:10:55.866490 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea18000-0xfea18fff]
May 9 00:10:55.866557 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400
May 9 00:10:55.866618 kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfea19000-0xfea19fff]
May 9 00:10:55.866709 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
May 9 00:10:55.866776 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
May 9 00:10:55.866869 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
May 9 00:10:55.866951 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc040-0xc05f]
May 9 00:10:55.867012 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea1a000-0xfea1afff]
May 9 00:10:55.867076 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
May 9 00:10:55.867138 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
May 9 00:10:55.867231 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000
May 9 00:10:55.867296 kernel: pci 0000:01:00.0: reg 0x14: [mem 0xfe880000-0xfe880fff]
May 9 00:10:55.867447 kernel: pci 0000:01:00.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref]
May 9 00:10:55.867513 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfe800000-0xfe87ffff pref]
May 9 00:10:55.867575 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
May 9 00:10:55.867665 kernel: pci 0000:00:02.0: bridge window [mem 0xfe800000-0xfe9fffff]
May 9 00:10:55.867754 kernel: pci 0000:00:02.0: bridge window [mem 0xfd000000-0xfd1fffff 64bit pref]
May 9 00:10:55.867828 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330
May 9 00:10:55.867942 kernel: pci 0000:02:00.0: reg 0x10: [mem 0xfe600000-0xfe603fff 64bit]
May 9 00:10:55.868007 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
May 9 00:10:55.868069 kernel: pci 0000:00:02.1: bridge window [mem 0xfe600000-0xfe7fffff]
May 9 00:10:55.868128 kernel: pci 0000:00:02.1: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
May 9 00:10:55.868197 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000
May 9 00:10:55.868262 kernel: pci 0000:03:00.0: reg 0x14: [mem 0xfe400000-0xfe400fff]
May 9 00:10:55.870430 kernel: pci 0000:03:00.0: reg 0x20: [mem 0xfcc00000-0xfcc03fff 64bit pref]
May 9 00:10:55.870522 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
May 9 00:10:55.870603 kernel: pci 0000:00:02.2: bridge window [mem 0xfe400000-0xfe5fffff]
May 9 00:10:55.870713 kernel: pci 0000:00:02.2: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
May 9 00:10:55.870807 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00
May 9 00:10:55.870917 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref]
May 9 00:10:55.870987 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
May 9 00:10:55.871053 kernel: pci 0000:00:02.3: bridge window [mem 0xfe200000-0xfe3fffff]
May 9 00:10:55.871113 kernel: pci 0000:00:02.3: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
May 9 00:10:55.871181 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00
May 9 00:10:55.871244 kernel: pci 0000:05:00.0: reg 0x14: [mem 0xfe000000-0xfe000fff]
May 9 00:10:55.872465 kernel: pci 0000:05:00.0: reg 0x20: [mem 0xfc800000-0xfc803fff 64bit pref]
May 9 00:10:55.872548 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
May 9 00:10:55.872612 kernel: pci 0000:00:02.4: bridge window [mem 0xfe000000-0xfe1fffff]
May 9 00:10:55.872697 kernel: pci 0000:00:02.4: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
May 9 00:10:55.872785 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000
May 9 00:10:55.872897 kernel: pci 0000:06:00.0: reg 0x14: [mem 0xfde00000-0xfde00fff]
May 9 00:10:55.872971 kernel: pci 0000:06:00.0: reg 0x20: [mem 0xfc600000-0xfc603fff 64bit pref]
May 9 00:10:55.873034 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
May 9 00:10:55.873095 kernel: pci 0000:00:02.5: bridge window [mem 0xfde00000-0xfdffffff]
May 9 00:10:55.873154 kernel: pci 0000:00:02.5: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
May 9 00:10:55.873163 kernel: acpiphp: Slot [0] registered
May 9 00:10:55.873235 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000
May 9 00:10:55.873314 kernel: pci 0000:07:00.0: reg 0x14: [mem 0xfdc80000-0xfdc80fff]
May 9 00:10:55.873433 kernel: pci 0000:07:00.0: reg 0x20: [mem 0xfc400000-0xfc403fff 64bit pref]
May 9 00:10:55.873501 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfdc00000-0xfdc7ffff pref]
May 9 00:10:55.873562 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
May 9 00:10:55.873637 kernel: pci 0000:00:02.6: bridge window [mem 0xfdc00000-0xfddfffff]
May 9 00:10:55.873702 kernel: pci 0000:00:02.6: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
May 9 00:10:55.873715 kernel: acpiphp: Slot [0-2] registered
May 9 00:10:55.873775 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
May 9 00:10:55.873849 kernel: pci 0000:00:02.7: bridge window [mem 0xfda00000-0xfdbfffff]
May 9 00:10:55.873941 kernel: pci 0000:00:02.7: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
May 9 00:10:55.873950 kernel: acpiphp: Slot [0-3] registered
May 9 00:10:55.874026 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
May 9 00:10:55.874091 kernel: pci 0000:00:03.0: bridge window [mem 0xfd800000-0xfd9fffff]
May 9 00:10:55.874150 kernel: pci 0000:00:03.0: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
May 9 00:10:55.874159 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
May 9 00:10:55.874168 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
May 9 00:10:55.874174 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
May 9 00:10:55.874180 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
May 9 00:10:55.874186 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
May 9 00:10:55.874191 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
May 9 00:10:55.874197 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
May 9 00:10:55.874203 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
May 9 00:10:55.874208 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
May 9 00:10:55.874214 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
May 9 00:10:55.874221 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
May 9 00:10:55.874227 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
May 9 00:10:55.874233 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
May 9 00:10:55.874238 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
May 9 00:10:55.874244 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
May 9 00:10:55.874250 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
May 9 00:10:55.874255 kernel: iommu: Default domain type: Translated
May 9 00:10:55.874261 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
May 9 00:10:55.874266 kernel: PCI: Using ACPI for IRQ routing
May 9 00:10:55.874273 kernel: PCI: pci_cache_line_size set to 64 bytes
May 9 00:10:55.874279 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
May 9 00:10:55.874284 kernel: e820: reserve RAM buffer [mem 0x7cfdc000-0x7fffffff]
May 9 00:10:55.874963 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
May 9 00:10:55.875035 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
May 9 00:10:55.875096 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
May 9 00:10:55.875105 kernel: vgaarb: loaded
May 9 00:10:55.875111 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
May 9 00:10:55.875121 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
May 9 00:10:55.875127 kernel: clocksource: Switched to clocksource kvm-clock
May 9 00:10:55.875132 kernel: VFS: Disk quotas dquot_6.6.0
May 9 00:10:55.875138 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 9 00:10:55.875144 kernel: pnp: PnP ACPI init
May 9 00:10:55.875210 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
May 9 00:10:55.875220 kernel: pnp: PnP ACPI: found 5 devices
May 9 00:10:55.875226 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
May 9 00:10:55.875232 kernel: NET: Registered PF_INET protocol family
May 9 00:10:55.875240 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 9 00:10:55.875246 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
May 9 00:10:55.875252 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 9 00:10:55.875258 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
May 9 00:10:55.875264 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
May 9 00:10:55.875269 kernel: TCP: Hash tables configured (established 16384 bind 16384)
May 9 00:10:55.875275 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
May 9 00:10:55.875280 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
May 9 00:10:55.875287 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 9 00:10:55.875296 kernel: NET: Registered PF_XDP protocol family
May 9 00:10:55.875422 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
May 9 00:10:55.875490 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
May 9 00:10:55.875551 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
May 9 00:10:55.875611 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x1000-0x1fff]
May 9 00:10:55.875702 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x2000-0x2fff]
May 9 00:10:55.875764 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x3000-0x3fff]
May 9 00:10:55.875834 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
May 9 00:10:55.875937 kernel: pci 0000:00:02.0: bridge window [mem 0xfe800000-0xfe9fffff]
May 9 00:10:55.876016 kernel: pci 0000:00:02.0: bridge window [mem 0xfd000000-0xfd1fffff 64bit pref]
May 9 00:10:55.876078 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
May 9 00:10:55.876138 kernel: pci 0000:00:02.1: bridge window [mem 0xfe600000-0xfe7fffff]
May 9 00:10:55.876198 kernel: pci 0000:00:02.1: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
May 9 00:10:55.876257 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
May 9 00:10:55.876408 kernel: pci 0000:00:02.2: bridge window [mem 0xfe400000-0xfe5fffff]
May 9 00:10:55.876485 kernel: pci 0000:00:02.2: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
May 9 00:10:55.876546 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
May 9 00:10:55.876607 kernel: pci 0000:00:02.3: bridge window [mem 0xfe200000-0xfe3fffff]
May 9 00:10:55.876689 kernel: pci 0000:00:02.3: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
May 9 00:10:55.876784 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
May 9 00:10:55.876887 kernel: pci 0000:00:02.4: bridge window [mem 0xfe000000-0xfe1fffff]
May 9 00:10:55.876962 kernel: pci 0000:00:02.4: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
May 9 00:10:55.877036 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
May 9 00:10:55.877101 kernel: pci 0000:00:02.5: bridge window [mem 0xfde00000-0xfdffffff]
May 9 00:10:55.877161 kernel: pci 0000:00:02.5: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
May 9 00:10:55.877220 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
May 9 00:10:55.877278 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x1fff]
May 9 00:10:55.877408 kernel: pci 0000:00:02.6: bridge window [mem 0xfdc00000-0xfddfffff]
May 9 00:10:55.877533 kernel: pci 0000:00:02.6: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
May 9 00:10:55.877611 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
May 9 00:10:55.877698 kernel: pci 0000:00:02.7: bridge window [io 0x2000-0x2fff]
May 9 00:10:55.877761 kernel: pci 0000:00:02.7: bridge window [mem 0xfda00000-0xfdbfffff]
May 9 00:10:55.877839 kernel: pci 0000:00:02.7: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
May 9 00:10:55.877934 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
May 9 00:10:55.877997 kernel: pci 0000:00:03.0: bridge window [io 0x3000-0x3fff]
May 9 00:10:55.878072 kernel: pci 0000:00:03.0: bridge window [mem 0xfd800000-0xfd9fffff]
May 9 00:10:55.878145 kernel: pci 0000:00:03.0: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
May 9 00:10:55.878202 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
May 9 00:10:55.878255 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
May 9 00:10:55.878410 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
May 9 00:10:55.878473 kernel: pci_bus 0000:00: resource 7 [mem 0x7d000000-0xafffffff window]
May 9 00:10:55.878527 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
May 9 00:10:55.878598 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
May 9 00:10:55.878687 kernel: pci_bus 0000:01: resource 1 [mem 0xfe800000-0xfe9fffff]
May 9 00:10:55.878747 kernel: pci_bus 0000:01: resource 2 [mem 0xfd000000-0xfd1fffff 64bit pref]
May 9 00:10:55.878808 kernel: pci_bus 0000:02: resource 1 [mem 0xfe600000-0xfe7fffff]
May 9 00:10:55.878909 kernel: pci_bus 0000:02: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref]
May 9 00:10:55.878978 kernel: pci_bus 0000:03: resource 1 [mem 0xfe400000-0xfe5fffff]
May 9 00:10:55.879040 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref]
May 9 00:10:55.879101 kernel: pci_bus 0000:04: resource 1 [mem 0xfe200000-0xfe3fffff]
May 9 00:10:55.879157 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref]
May 9 00:10:55.879218 kernel: pci_bus 0000:05: resource 1 [mem 0xfe000000-0xfe1fffff]
May 9 00:10:55.879275 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref]
May 9 00:10:55.879456 kernel: pci_bus 0000:06: resource 1 [mem 0xfde00000-0xfdffffff]
May 9 00:10:55.879546 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref]
May 9 00:10:55.879634 kernel: pci_bus 0000:07: resource 0 [io 0x1000-0x1fff]
May 9 00:10:55.879713 kernel: pci_bus 0000:07: resource 1 [mem 0xfdc00000-0xfddfffff]
May 9 00:10:55.879772 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref]
May 9 00:10:55.879834 kernel: pci_bus 0000:08: resource 0 [io 0x2000-0x2fff]
May 9 00:10:55.879936 kernel: pci_bus 0000:08: resource 1 [mem 0xfda00000-0xfdbfffff]
May 9 00:10:55.880000 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref]
May 9 00:10:55.880072 kernel: pci_bus 0000:09: resource 0 [io 0x3000-0x3fff]
May 9 00:10:55.880129 kernel: pci_bus 0000:09: resource 1 [mem 0xfd800000-0xfd9fffff]
May 9 00:10:55.880186 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref]
May 9 00:10:55.880195 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
May 9 00:10:55.880203 kernel: PCI: CLS 0 bytes, default 64
May 9 00:10:55.880209 kernel: Initialise system trusted keyrings
May 9 00:10:55.880215 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
May 9 00:10:55.880222 kernel: Key type asymmetric registered
May 9 00:10:55.880231 kernel: Asymmetric key parser 'x509' registered
May 9 00:10:55.880237 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
May 9 00:10:55.880244 kernel: io scheduler mq-deadline registered
May 9 00:10:55.880249 kernel: io scheduler kyber registered
May 9 00:10:55.880255 kernel: io scheduler bfq registered
May 9 00:10:55.880392 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24
May 9 00:10:55.880483 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24
May 9 00:10:55.880553 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25
May 9 00:10:55.880631 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25
May 9 00:10:55.880708 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26
May 9 00:10:55.880774 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26
May 9 00:10:55.880842 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27
May 9 00:10:55.880948 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27
May 9 00:10:55.881016 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28
May 9 00:10:55.881082 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28
May 9 00:10:55.881148 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29
May 9 00:10:55.881211 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29
May 9 00:10:55.881290 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30
May 9 00:10:55.881427 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30
May 9 00:10:55.881496 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31
May 9 00:10:55.881559 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31
May 9 00:10:55.881568 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
May 9 00:10:55.881645 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 32
May 9 00:10:55.881714 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 32
May 9 00:10:55.881724 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
May 9 00:10:55.881735 kernel: ACPI: \_SB_.GSIF: Enabled at IRQ 21
May 9 00:10:55.881741 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 9 00:10:55.881747 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
May 9 00:10:55.881753 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
May 9 00:10:55.881759 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
May 9 00:10:55.881766 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
May 9 00:10:55.881772 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
May 9 00:10:55.881838 kernel: rtc_cmos 00:03: RTC can wake from S4
May 9 00:10:55.881941 kernel: rtc_cmos 00:03: registered as rtc0
May 9 00:10:55.882014 kernel: rtc_cmos 00:03: setting system clock to 2025-05-09T00:10:55 UTC (1746749455)
May 9 00:10:55.882072 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
May 9 00:10:55.882081 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
May 9 00:10:55.882087 kernel: NET: Registered PF_INET6 protocol family
May 9 00:10:55.882093 kernel: Segment Routing with IPv6
May 9 00:10:55.882099 kernel: In-situ OAM (IOAM) with IPv6
May 9 00:10:55.882105 kernel: NET: Registered PF_PACKET protocol family
May 9 00:10:55.882111 kernel: Key type dns_resolver registered
May 9 00:10:55.882121 kernel: IPI shorthand broadcast: enabled
May 9 00:10:55.882127 kernel: sched_clock: Marking stable (1101010518, 133950267)->(1245867378, -10906593)
May 9 00:10:55.882133 kernel: registered taskstats version 1
May 9 00:10:55.882139 kernel: Loading compiled-in X.509 certificates
May 9 00:10:55.882145 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.89-flatcar: eadd5f695247828f81e51397e7264f8efd327b51'
May 9 00:10:55.882151 kernel: Key type .fscrypt registered
May 9 00:10:55.882157 kernel: Key type fscrypt-provisioning registered
May 9 00:10:55.882163 kernel: ima: No TPM chip found, activating TPM-bypass!
May 9 00:10:55.882169 kernel: ima: Allocated hash algorithm: sha1
May 9 00:10:55.882176 kernel: ima: No architecture policies found
May 9 00:10:55.882182 kernel: clk: Disabling unused clocks
May 9 00:10:55.882188 kernel: Freeing unused kernel image (initmem) memory: 43000K
May 9 00:10:55.882195 kernel: Write protecting the kernel read-only data: 36864k
May 9 00:10:55.882201 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
May 9 00:10:55.882207 kernel: Run /init as init process
May 9 00:10:55.882213 kernel: with arguments:
May 9 00:10:55.882219 kernel: /init
May 9 00:10:55.882225 kernel: with environment:
May 9 00:10:55.882233 kernel: HOME=/
May 9 00:10:55.882239 kernel: TERM=linux
May 9 00:10:55.882245 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 9 00:10:55.882253 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
May 9 00:10:55.882261 systemd[1]: Detected virtualization kvm.
May 9 00:10:55.882268 systemd[1]: Detected architecture x86-64.
May 9 00:10:55.882274 systemd[1]: Running in initrd.
May 9 00:10:55.882280 systemd[1]: No hostname configured, using default hostname.
May 9 00:10:55.882288 systemd[1]: Hostname set to .
May 9 00:10:55.882294 systemd[1]: Initializing machine ID from VM UUID.
May 9 00:10:55.882409 systemd[1]: Queued start job for default target initrd.target.
May 9 00:10:55.882419 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 9 00:10:55.882428 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 9 00:10:55.882435 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 9 00:10:55.882442 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 9 00:10:55.882448 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 9 00:10:55.882458 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 9 00:10:55.882466 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 9 00:10:55.882472 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 9 00:10:55.882479 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 9 00:10:55.882485 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 9 00:10:55.882492 systemd[1]: Reached target paths.target - Path Units.
May 9 00:10:55.882500 systemd[1]: Reached target slices.target - Slice Units.
May 9 00:10:55.882506 systemd[1]: Reached target swap.target - Swaps.
May 9 00:10:55.882512 systemd[1]: Reached target timers.target - Timer Units.
May 9 00:10:55.882519 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 9 00:10:55.882525 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 9 00:10:55.882532 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 9 00:10:55.882538 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
May 9 00:10:55.882545 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 9 00:10:55.882551 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 9 00:10:55.882559 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 9 00:10:55.882565 systemd[1]: Reached target sockets.target - Socket Units.
May 9 00:10:55.882571 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
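The device unit names in the entries above (`dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device` and friends) come from systemd's unit-name escaping of device paths: `/` separators become `-` and literal `-` characters become `\x2d`. A simplified sketch of that mangling (real systemd escaping, as done by `systemd-escape --path`, also handles dots, backslashes, and other non-alphanumeric bytes):

```python
# Simplified sketch of systemd's path-to-unit-name escaping, matching the
# device unit names in the log above. Not the full escaping algorithm.
def systemd_escape_path(path):
    trimmed = path.strip("/")
    # escape literal "-" first, then turn "/" separators into "-"
    escaped = trimmed.replace("-", "\\x2d")
    return escaped.replace("/", "-")

print(systemd_escape_path("/dev/disk/by-label/EFI-SYSTEM") + ".device")
# dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device
print(systemd_escape_path("/dev/mapper/usr") + ".device")
# dev-mapper-usr.device
```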
May 9 00:10:55.882577 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 9 00:10:55.882584 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 9 00:10:55.882590 systemd[1]: Starting systemd-fsck-usr.service...
May 9 00:10:55.882596 systemd[1]: Starting systemd-journald.service - Journal Service...
May 9 00:10:55.882602 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 9 00:10:55.882609 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 9 00:10:55.882616 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 9 00:10:55.882634 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 9 00:10:55.882640 systemd[1]: Finished systemd-fsck-usr.service.
May 9 00:10:55.882647 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 9 00:10:55.882672 systemd-journald[188]: Collecting audit messages is disabled.
May 9 00:10:55.882689 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 9 00:10:55.882697 systemd-journald[188]: Journal started
May 9 00:10:55.882714 systemd-journald[188]: Runtime Journal (/run/log/journal/1333148516874e1da1c7639916dc6dc2) is 4.8M, max 38.4M, 33.6M free.
May 9 00:10:55.875341 systemd-modules-load[189]: Inserted module 'overlay'
May 9 00:10:55.921882 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 9 00:10:55.921907 kernel: Bridge firewalling registered
May 9 00:10:55.921915 systemd[1]: Started systemd-journald.service - Journal Service.
May 9 00:10:55.901077 systemd-modules-load[189]: Inserted module 'br_netfilter'
May 9 00:10:55.922487 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 9 00:10:55.923446 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 9 00:10:55.928512 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 9 00:10:55.930314 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 9 00:10:55.934407 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 9 00:10:55.938400 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 9 00:10:55.943479 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 9 00:10:55.950567 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 9 00:10:55.951321 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 9 00:10:55.954413 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 9 00:10:55.956704 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 9 00:10:55.962410 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 9 00:10:55.968914 dracut-cmdline[222]: dracut-dracut-053
May 9 00:10:55.971212 dracut-cmdline[222]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=8e6c4805303143bfaf51e786bee05d9a5466809f675df313b1f69aaa84c2d4ce
May 9 00:10:55.985000 systemd-resolved[224]: Positive Trust Anchors:
May 9 00:10:55.985333 systemd-resolved[224]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 9 00:10:55.985358 systemd-resolved[224]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 9 00:10:55.994263 systemd-resolved[224]: Defaulting to hostname 'linux'.
May 9 00:10:55.995073 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 9 00:10:55.995768 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 9 00:10:56.023355 kernel: SCSI subsystem initialized
May 9 00:10:56.031327 kernel: Loading iSCSI transport class v2.0-870.
May 9 00:10:56.040328 kernel: iscsi: registered transport (tcp)
May 9 00:10:56.057928 kernel: iscsi: registered transport (qla4xxx)
May 9 00:10:56.057975 kernel: QLogic iSCSI HBA Driver
May 9 00:10:56.089571 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 9 00:10:56.094488 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 9 00:10:56.113330 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 9 00:10:56.113384 kernel: device-mapper: uevent: version 1.0.3
May 9 00:10:56.113397 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
May 9 00:10:56.149346 kernel: raid6: avx2x4 gen() 34185 MB/s
May 9 00:10:56.166324 kernel: raid6: avx2x2 gen() 30572 MB/s
May 9 00:10:56.183474 kernel: raid6: avx2x1 gen() 25962 MB/s
May 9 00:10:56.183503 kernel: raid6: using algorithm avx2x4 gen() 34185 MB/s
May 9 00:10:56.201524 kernel: raid6: .... xor() 5536 MB/s, rmw enabled
May 9 00:10:56.201545 kernel: raid6: using avx2x2 recovery algorithm
May 9 00:10:56.219450 kernel: xor: automatically using best checksumming function avx
May 9 00:10:56.339361 kernel: Btrfs loaded, zoned=no, fsverity=no
May 9 00:10:56.352406 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
May 9 00:10:56.359557 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 9 00:10:56.369568 systemd-udevd[407]: Using default interface naming scheme 'v255'.
May 9 00:10:56.372683 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 9 00:10:56.382598 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 9 00:10:56.399053 dracut-pre-trigger[417]: rd.md=0: removing MD RAID activation
May 9 00:10:56.431422 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 9 00:10:56.440424 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 9 00:10:56.486024 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 9 00:10:56.494384 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
May 9 00:10:56.502720 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
May 9 00:10:56.504169 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
May 9 00:10:56.505400 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 9 00:10:56.505942 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 9 00:10:56.513404 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
May 9 00:10:56.523797 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
May 9 00:10:56.564353 kernel: cryptd: max_cpu_qlen set to 1000
May 9 00:10:56.566398 kernel: scsi host0: Virtio SCSI HBA
May 9 00:10:56.614264 kernel: ACPI: bus type USB registered
May 9 00:10:56.614279 kernel: usbcore: registered new interface driver usbfs
May 9 00:10:56.614287 kernel: usbcore: registered new interface driver hub
May 9 00:10:56.614295 kernel: usbcore: registered new device driver usb
May 9 00:10:56.614320 kernel: libata version 3.00 loaded.
May 9 00:10:56.622334 kernel: AVX2 version of gcm_enc/dec engaged.
May 9 00:10:56.626478 kernel: AES CTR mode by8 optimization enabled
May 9 00:10:56.626991 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 9 00:10:56.633387 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
May 9 00:10:56.633522 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1
May 9 00:10:56.633608 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010
May 9 00:10:56.633708 kernel: ahci 0000:00:1f.2: version 3.0
May 9 00:10:56.633794 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
May 9 00:10:56.627232 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 9 00:10:56.639126 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
May 9 00:10:56.639234 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
May 9 00:10:56.639345 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
May 9 00:10:56.631857 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 9 00:10:56.670813 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2
May 9 00:10:56.671362 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed
May 9 00:10:56.671456 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
May 9 00:10:56.671476 kernel: scsi host1: ahci
May 9 00:10:56.671566 kernel: hub 1-0:1.0: USB hub found
May 9 00:10:56.671679 kernel: hub 1-0:1.0: 4 ports detected
May 9 00:10:56.671760 kernel: scsi host2: ahci
May 9 00:10:56.671847 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM.
May 9 00:10:56.671933 kernel: scsi host3: ahci
May 9 00:10:56.672007 kernel: hub 2-0:1.0: USB hub found
May 9 00:10:56.672091 kernel: hub 2-0:1.0: 4 ports detected
May 9 00:10:56.672171 kernel: scsi host4: ahci
May 9 00:10:56.672247 kernel: scsi host5: ahci
May 9 00:10:56.672342 kernel: scsi host6: ahci
May 9 00:10:56.672418 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a100 irq 48
May 9 00:10:56.672426 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a180 irq 48
May 9 00:10:56.672434 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a200 irq 48
May 9 00:10:56.672441 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a280 irq 48
May 9 00:10:56.672451 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a300 irq 48
May 9 00:10:56.672458 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a380 irq 48
May 9 00:10:56.632327 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 9 00:10:56.632424 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 9 00:10:56.632886 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 9 00:10:56.647080 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 9 00:10:56.720380 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 9 00:10:56.726467 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 9 00:10:56.740179 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 9 00:10:56.883404 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd
May 9 00:10:56.961795 kernel: ata3: SATA link down (SStatus 0 SControl 300)
May 9 00:10:56.961890 kernel: ata5: SATA link down (SStatus 0 SControl 300)
May 9 00:10:56.964273 kernel: ata2: SATA link down (SStatus 0 SControl 300)
May 9 00:10:56.967349 kernel: ata6: SATA link down (SStatus 0 SControl 300)
May 9 00:10:56.967410 kernel: ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
May 9 00:10:56.971322 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
May 9 00:10:56.973881 kernel: ata1.00: applying bridge limits
May 9 00:10:56.976315 kernel: ata4: SATA link down (SStatus 0 SControl 300)
May 9 00:10:56.977349 kernel: ata1.00: configured for UDMA/100
May 9 00:10:56.979346 kernel: scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
May 9 00:10:57.035338 kernel: sd 0:0:0:0: Power-on or device reset occurred
May 9 00:10:57.038340 kernel: sd 0:0:0:0: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB)
May 9 00:10:57.040136 kernel: sd 0:0:0:0: [sda] Write Protect is off
May 9 00:10:57.040319 kernel: hid: raw HID events driver (C) Jiri Kosina
May 9 00:10:57.040332 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08
May 9 00:10:57.042921 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
May 9 00:10:57.043131 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
May 9 00:10:57.043271 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
May 9 00:10:57.057325 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 9 00:10:57.057358 kernel: GPT:17805311 != 80003071
May 9 00:10:57.057367 kernel: GPT:Alternate GPT header not at the end of the disk.
May 9 00:10:57.059357 kernel: GPT:17805311 != 80003071
May 9 00:10:57.060461 kernel: GPT: Use GNU Parted to correct GPT errors.
May 9 00:10:57.061474 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
May 9 00:10:57.065321 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
May 9 00:10:57.067500 kernel: usbcore: registered new interface driver usbhid
May 9 00:10:57.067521 kernel: usbhid: USB HID core driver
May 9 00:10:57.070547 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0
May 9 00:10:57.070679 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2
May 9 00:10:57.075335 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0
May 9 00:10:57.103318 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by (udev-worker) (459)
May 9 00:10:57.111317 kernel: BTRFS: device fsid cea98156-267a-4592-a459-5921031522cf devid 1 transid 40 /dev/sda3 scanned by (udev-worker) (469)
May 9 00:10:57.110381 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM.
May 9 00:10:57.117937 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
May 9 00:10:57.122426 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT.
May 9 00:10:57.127721 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A.
May 9 00:10:57.128346 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A.
May 9 00:10:57.135397 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
May 9 00:10:57.141235 disk-uuid[585]: Primary Header is updated.
May 9 00:10:57.141235 disk-uuid[585]: Secondary Entries is updated.
May 9 00:10:57.141235 disk-uuid[585]: Secondary Header is updated.
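The `GPT:17805311 != 80003071` complaint above is the expected first-boot mismatch: the backup GPT header baked into the Flatcar disk image sits at the last LBA of the original image, while GPT requires it at the last LBA of the (larger) provisioned disk, which `disk-uuid.service` then corrects. The arithmetic behind the two numbers, reconstructed from the log's own figures:

```python
# Reconstructing the kernel's "GPT:17805311 != 80003071" from the log:
# the image was built for 17805312 sectors, the VM disk has 80003072
# 512-byte logical blocks (see the sd 0:0:0:0 line above).
image_sectors = 17805312        # assumed original image size, in sectors
disk_bytes = 80003072 * 512     # from "80003072 512-byte logical blocks"

image_backup_lba = image_sectors - 1    # where the backup header actually is
disk_last_lba = disk_bytes // 512 - 1   # where GPT expects it: the last LBA

print(image_backup_lba, disk_last_lba)  # 17805311 80003071
```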
May 9 00:10:57.146325 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
May 9 00:10:58.161350 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
May 9 00:10:58.163486 disk-uuid[587]: The operation has completed successfully.
May 9 00:10:58.225741 systemd[1]: disk-uuid.service: Deactivated successfully.
May 9 00:10:58.225834 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
May 9 00:10:58.242462 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
May 9 00:10:58.247972 sh[603]: Success
May 9 00:10:58.266421 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
May 9 00:10:58.325832 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
May 9 00:10:58.335386 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
May 9 00:10:58.337374 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
May 9 00:10:58.375923 kernel: BTRFS info (device dm-0): first mount of filesystem cea98156-267a-4592-a459-5921031522cf
May 9 00:10:58.375970 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
May 9 00:10:58.379441 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
May 9 00:10:58.384770 kernel: BTRFS info (device dm-0): disabling log replay at mount time
May 9 00:10:58.384796 kernel: BTRFS info (device dm-0): using free space tree
May 9 00:10:58.398324 kernel: BTRFS info (device dm-0): enabling ssd optimizations
May 9 00:10:58.401432 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
May 9 00:10:58.403119 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
May 9 00:10:58.409478 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
May 9 00:10:58.412421 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
May 9 00:10:58.433321 kernel: BTRFS info (device sda6): first mount of filesystem 06eb5ada-09bb-4b72-a741-1d4e677346cf
May 9 00:10:58.433389 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
May 9 00:10:58.433413 kernel: BTRFS info (device sda6): using free space tree
May 9 00:10:58.439444 kernel: BTRFS info (device sda6): enabling ssd optimizations
May 9 00:10:58.439483 kernel: BTRFS info (device sda6): auto enabling async discard
May 9 00:10:58.450795 systemd[1]: mnt-oem.mount: Deactivated successfully.
May 9 00:10:58.456483 kernel: BTRFS info (device sda6): last unmount of filesystem 06eb5ada-09bb-4b72-a741-1d4e677346cf
May 9 00:10:58.457241 systemd[1]: Finished ignition-setup.service - Ignition (setup).
May 9 00:10:58.466497 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
May 9 00:10:58.498556 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 9 00:10:58.506430 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 9 00:10:58.525010 systemd-networkd[784]: lo: Link UP
May 9 00:10:58.525018 systemd-networkd[784]: lo: Gained carrier
May 9 00:10:58.527086 systemd-networkd[784]: Enumeration completed
May 9 00:10:58.527730 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 9 00:10:58.528059 systemd-networkd[784]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 9 00:10:58.528062 systemd-networkd[784]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 9 00:10:58.528282 systemd[1]: Reached target network.target - Network.
May 9 00:10:58.531957 systemd-networkd[784]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 9 00:10:58.531960 systemd-networkd[784]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
May 9 00:10:58.532451 systemd-networkd[784]: eth0: Link UP
May 9 00:10:58.532454 systemd-networkd[784]: eth0: Gained carrier
May 9 00:10:58.532460 systemd-networkd[784]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 9 00:10:58.536628 systemd-networkd[784]: eth1: Link UP
May 9 00:10:58.536634 systemd-networkd[784]: eth1: Gained carrier
May 9 00:10:58.536640 systemd-networkd[784]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 9 00:10:58.544866 ignition[725]: Ignition 2.20.0
May 9 00:10:58.544879 ignition[725]: Stage: fetch-offline
May 9 00:10:58.546211 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
May 9 00:10:58.544925 ignition[725]: no configs at "/usr/lib/ignition/base.d"
May 9 00:10:58.544939 ignition[725]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
May 9 00:10:58.545032 ignition[725]: parsed url from cmdline: ""
May 9 00:10:58.545035 ignition[725]: no config URL provided
May 9 00:10:58.545039 ignition[725]: reading system config file "/usr/lib/ignition/user.ign"
May 9 00:10:58.545046 ignition[725]: no config at "/usr/lib/ignition/user.ign"
May 9 00:10:58.545050 ignition[725]: failed to fetch config: resource requires networking
May 9 00:10:58.545266 ignition[725]: Ignition finished successfully
May 9 00:10:58.558422 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
May 9 00:10:58.560347 systemd-networkd[784]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1
May 9 00:10:58.569784 ignition[792]: Ignition 2.20.0
May 9 00:10:58.570333 ignition[792]: Stage: fetch
May 9 00:10:58.570468 ignition[792]: no configs at "/usr/lib/ignition/base.d"
May 9 00:10:58.570476 ignition[792]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
May 9 00:10:58.570545 ignition[792]: parsed url from cmdline: ""
May 9 00:10:58.570548 ignition[792]: no config URL provided
May 9 00:10:58.570552 ignition[792]: reading system config file "/usr/lib/ignition/user.ign"
May 9 00:10:58.570558 ignition[792]: no config at "/usr/lib/ignition/user.ign"
May 9 00:10:58.570576 ignition[792]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1
May 9 00:10:58.570735 ignition[792]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable
May 9 00:10:58.595350 systemd-networkd[784]: eth0: DHCPv4 address 157.180.45.97/32, gateway 172.31.1.1 acquired from 172.31.1.1
May 9 00:10:58.771896 ignition[792]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2
May 9 00:10:58.779547 ignition[792]: GET result: OK
May 9 00:10:58.779621 ignition[792]: parsing config with SHA512: b8c736e52899753b44d42b691082647d2642cd735f23f969560dc500140cb7f51980c6ffd1132aedb4f3f493a4b1f8f7faaf5a943385cd1a2711772e9087148f
May 9 00:10:58.786724 unknown[792]: fetched base config from "system"
May 9 00:10:58.786748 unknown[792]: fetched base config from "system"
May 9 00:10:58.787373 ignition[792]: fetch: fetch complete
May 9 00:10:58.786757 unknown[792]: fetched user config from "hetzner"
May 9 00:10:58.787382 ignition[792]: fetch: fetch passed
May 9 00:10:58.787442 ignition[792]: Ignition finished successfully
May 9 00:10:58.790806 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
May 9 00:10:58.804582 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
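The fetch stage above shows Ignition's retry in action: attempt #1 against the Hetzner metadata service fails with "network is unreachable" because eth0 has no DHCP lease yet, and attempt #2 succeeds once the address is acquired. A hypothetical minimal retry loop illustrating that pattern (not Ignition's actual Go implementation):

```python
# Illustrative fetch-with-retry, sketching the behavior in the log above.
# The URL and retry parameters are assumptions, not Ignition's real values.
import time
import urllib.error
import urllib.request

def fetch_userdata(url, attempts=3, delay=0.2):
    """Fetch url, retrying transient failures; raise after the last attempt."""
    for attempt in range(1, attempts + 1):
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                return resp.read()
        except (urllib.error.URLError, OSError):
            if attempt == attempts:
                raise
            time.sleep(delay)  # the network may still be coming up
```

In the log the gap between attempts (~200 ms) is long enough for systemd-networkd to finish DHCP on eth0, which is why the second GET returns OK.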
May 9 00:10:58.816901 ignition[799]: Ignition 2.20.0
May 9 00:10:58.816917 ignition[799]: Stage: kargs
May 9 00:10:58.817085 ignition[799]: no configs at "/usr/lib/ignition/base.d"
May 9 00:10:58.819152 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
May 9 00:10:58.817096 ignition[799]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
May 9 00:10:58.817990 ignition[799]: kargs: kargs passed
May 9 00:10:58.818030 ignition[799]: Ignition finished successfully
May 9 00:10:58.838501 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
May 9 00:10:58.855395 ignition[806]: Ignition 2.20.0
May 9 00:10:58.855414 ignition[806]: Stage: disks
May 9 00:10:58.855627 ignition[806]: no configs at "/usr/lib/ignition/base.d"
May 9 00:10:58.858775 systemd[1]: Finished ignition-disks.service - Ignition (disks).
May 9 00:10:58.855642 ignition[806]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
May 9 00:10:58.860686 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
May 9 00:10:58.856757 ignition[806]: disks: disks passed
May 9 00:10:58.862464 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 9 00:10:58.856807 ignition[806]: Ignition finished successfully
May 9 00:10:58.864580 systemd[1]: Reached target local-fs.target - Local File Systems.
May 9 00:10:58.866574 systemd[1]: Reached target sysinit.target - System Initialization.
May 9 00:10:58.868174 systemd[1]: Reached target basic.target - Basic System.
May 9 00:10:58.877409 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
May 9 00:10:58.893684 systemd-fsck[814]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
May 9 00:10:58.896657 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
May 9 00:10:58.905450 systemd[1]: Mounting sysroot.mount - /sysroot...
May 9 00:10:58.999675 kernel: EXT4-fs (sda9): mounted filesystem 61492938-2ced-4ec2-b593-fc96fa0fefcc r/w with ordered data mode. Quota mode: none. May 9 00:10:58.999993 systemd[1]: Mounted sysroot.mount - /sysroot. May 9 00:10:59.000872 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. May 9 00:10:59.011383 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 9 00:10:59.015367 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... May 9 00:10:59.018418 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... May 9 00:10:59.019705 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 9 00:10:59.019729 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. May 9 00:10:59.022137 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. May 9 00:10:59.031444 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... May 9 00:10:59.044076 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (822) May 9 00:10:59.044098 kernel: BTRFS info (device sda6): first mount of filesystem 06eb5ada-09bb-4b72-a741-1d4e677346cf May 9 00:10:59.044107 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm May 9 00:10:59.044119 kernel: BTRFS info (device sda6): using free space tree May 9 00:10:59.053772 kernel: BTRFS info (device sda6): enabling ssd optimizations May 9 00:10:59.053797 kernel: BTRFS info (device sda6): auto enabling async discard May 9 00:10:59.058635 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
May 9 00:10:59.092076 initrd-setup-root[849]: cut: /sysroot/etc/passwd: No such file or directory May 9 00:10:59.093679 coreos-metadata[824]: May 09 00:10:59.093 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1 May 9 00:10:59.095378 coreos-metadata[824]: May 09 00:10:59.095 INFO Fetch successful May 9 00:10:59.095968 coreos-metadata[824]: May 09 00:10:59.095 INFO wrote hostname ci-4152-2-3-n-8b48d2c086 to /sysroot/etc/hostname May 9 00:10:59.097795 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. May 9 00:10:59.099962 initrd-setup-root[856]: cut: /sysroot/etc/group: No such file or directory May 9 00:10:59.102268 initrd-setup-root[864]: cut: /sysroot/etc/shadow: No such file or directory May 9 00:10:59.106156 initrd-setup-root[871]: cut: /sysroot/etc/gshadow: No such file or directory May 9 00:10:59.173915 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. May 9 00:10:59.179410 systemd[1]: Starting ignition-mount.service - Ignition (mount)... May 9 00:10:59.183422 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... May 9 00:10:59.185788 kernel: BTRFS info (device sda6): last unmount of filesystem 06eb5ada-09bb-4b72-a741-1d4e677346cf May 9 00:10:59.201161 ignition[938]: INFO : Ignition 2.20.0 May 9 00:10:59.202535 ignition[938]: INFO : Stage: mount May 9 00:10:59.202535 ignition[938]: INFO : no configs at "/usr/lib/ignition/base.d" May 9 00:10:59.202535 ignition[938]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" May 9 00:10:59.205341 ignition[938]: INFO : mount: mount passed May 9 00:10:59.205341 ignition[938]: INFO : Ignition finished successfully May 9 00:10:59.205528 systemd[1]: Finished ignition-mount.service - Ignition (mount). May 9 00:10:59.212480 systemd[1]: Starting ignition-files.service - Ignition (files)... May 9 00:10:59.213257 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
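The coreos-metadata lines above fetch the hostname from the Hetzner metadata service and write it to `/sysroot/etc/hostname`. A hedged sketch of that step, with the URL and paths taken from the log; the function name and injectable `fetch` parameter are illustrative, not the agent's real interface:

```python
import os
from urllib.request import urlopen

# Endpoint from the log: flatcar-metadata-hostname.service fetches this URL.
HOSTNAME_URL = "http://169.254.169.254/hetzner/v1/metadata/hostname"

def write_hostname(sysroot="/sysroot", fetch=lambda: urlopen(HOSTNAME_URL).read()):
    """Fetch the hostname and write it into the mounted sysroot."""
    hostname = fetch().decode().strip()
    os.makedirs(os.path.join(sysroot, "etc"), exist_ok=True)
    with open(os.path.join(sysroot, "etc", "hostname"), "w") as f:
        f.write(hostname + "\n")
    return hostname
```

In the log this writes `ci-4152-2-3-n-8b48d2c086`, after which the service reports "Fetch successful" and systemd marks it finished.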
May 9 00:10:59.372148 systemd[1]: sysroot-oem.mount: Deactivated successfully. May 9 00:10:59.378559 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 9 00:10:59.395428 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (952) May 9 00:10:59.400422 kernel: BTRFS info (device sda6): first mount of filesystem 06eb5ada-09bb-4b72-a741-1d4e677346cf May 9 00:10:59.400473 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm May 9 00:10:59.403419 kernel: BTRFS info (device sda6): using free space tree May 9 00:10:59.411535 kernel: BTRFS info (device sda6): enabling ssd optimizations May 9 00:10:59.411603 kernel: BTRFS info (device sda6): auto enabling async discard May 9 00:10:59.418328 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 9 00:10:59.446165 ignition[968]: INFO : Ignition 2.20.0 May 9 00:10:59.446165 ignition[968]: INFO : Stage: files May 9 00:10:59.448680 ignition[968]: INFO : no configs at "/usr/lib/ignition/base.d" May 9 00:10:59.448680 ignition[968]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" May 9 00:10:59.448680 ignition[968]: DEBUG : files: compiled without relabeling support, skipping May 9 00:10:59.452955 ignition[968]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 9 00:10:59.452955 ignition[968]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 9 00:10:59.455933 ignition[968]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 9 00:10:59.455933 ignition[968]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 9 00:10:59.455933 ignition[968]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 9 00:10:59.454559 unknown[968]: wrote ssh authorized keys file for user: core May 9 00:10:59.459951 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file 
"/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" May 9 00:10:59.459951 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 May 9 00:10:59.819734 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK May 9 00:11:00.374481 systemd-networkd[784]: eth0: Gained IPv6LL May 9 00:11:00.566592 systemd-networkd[784]: eth1: Gained IPv6LL May 9 00:11:02.706907 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" May 9 00:11:02.708592 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" May 9 00:11:02.708592 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 May 9 00:11:03.506143 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK May 9 00:11:03.929148 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" May 9 00:11:03.929148 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" May 9 00:11:03.933070 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" May 9 00:11:03.933070 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" May 9 00:11:03.933070 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" May 9 00:11:03.933070 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 9 
00:11:03.933070 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 9 00:11:03.933070 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 9 00:11:03.933070 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 9 00:11:03.933070 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" May 9 00:11:03.933070 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 9 00:11:03.933070 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" May 9 00:11:03.933070 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" May 9 00:11:03.933070 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" May 9 00:11:03.933070 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 May 9 00:11:04.519056 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK May 9 00:11:04.686943 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" May 9 00:11:04.686943 ignition[968]: INFO : files: op(c): [started] processing unit "prepare-helm.service" 
May 9 00:11:04.690662 ignition[968]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 9 00:11:04.690662 ignition[968]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 9 00:11:04.690662 ignition[968]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" May 9 00:11:04.690662 ignition[968]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" May 9 00:11:04.690662 ignition[968]: INFO : files: op(e): op(f): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" May 9 00:11:04.690662 ignition[968]: INFO : files: op(e): op(f): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" May 9 00:11:04.690662 ignition[968]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" May 9 00:11:04.690662 ignition[968]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" May 9 00:11:04.690662 ignition[968]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" May 9 00:11:04.690662 ignition[968]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json" May 9 00:11:04.690662 ignition[968]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json" May 9 00:11:04.690662 ignition[968]: INFO : files: files passed May 9 00:11:04.690662 ignition[968]: INFO : Ignition finished successfully May 9 00:11:04.689290 systemd[1]: Finished ignition-files.service - Ignition (files). May 9 00:11:04.700438 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... 
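The files stage above ends with op(10), "setting preset to enabled" for `prepare-helm.service`. Enabling a unit offline in a mounted sysroot amounts to creating the install symlink under the target's `.wants/` directory; the sketch below assumes a `multi-user.target` install target, which the log does not state (Ignition's real implementation goes through systemd's preset machinery, so this is a simplification):

```python
import os

def enable_unit(sysroot, unit, target="multi-user.target"):
    """Offline-enable a unit by symlinking it into <target>.wants/ in sysroot."""
    wants = os.path.join(sysroot, "etc", "systemd", "system", f"{target}.wants")
    os.makedirs(wants, exist_ok=True)
    link = os.path.join(wants, unit)
    if not os.path.islink(link):
        # Point at the unit file Ignition wrote in op(d) above.
        os.symlink(f"/etc/systemd/system/{unit}", link)
    return link
```

On the next boot, systemd sees the symlink and pulls `prepare-helm.service` into the target's dependency graph, which is why the unit runs in the real root even though it was written from the initrd.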
May 9 00:11:04.706399 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... May 9 00:11:04.712272 systemd[1]: ignition-quench.service: Deactivated successfully. May 9 00:11:04.712460 systemd[1]: Finished ignition-quench.service - Ignition (record completion). May 9 00:11:04.720051 initrd-setup-root-after-ignition[998]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 9 00:11:04.720051 initrd-setup-root-after-ignition[998]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory May 9 00:11:04.721686 initrd-setup-root-after-ignition[1002]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 9 00:11:04.722919 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. May 9 00:11:04.724648 systemd[1]: Reached target ignition-complete.target - Ignition Complete. May 9 00:11:04.739482 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... May 9 00:11:04.771859 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 9 00:11:04.772004 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. May 9 00:11:04.773686 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. May 9 00:11:04.775340 systemd[1]: Reached target initrd.target - Initrd Default Target. May 9 00:11:04.776916 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. May 9 00:11:04.788475 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... May 9 00:11:04.806031 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 9 00:11:04.819500 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... May 9 00:11:04.833771 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. 
May 9 00:11:04.835113 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. May 9 00:11:04.837223 systemd[1]: Stopped target timers.target - Timer Units. May 9 00:11:04.839074 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 9 00:11:04.839342 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 9 00:11:04.841492 systemd[1]: Stopped target initrd.target - Initrd Default Target. May 9 00:11:04.842888 systemd[1]: Stopped target basic.target - Basic System. May 9 00:11:04.844831 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. May 9 00:11:04.846641 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. May 9 00:11:04.848407 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. May 9 00:11:04.850440 systemd[1]: Stopped target remote-fs.target - Remote File Systems. May 9 00:11:04.852419 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. May 9 00:11:04.854473 systemd[1]: Stopped target sysinit.target - System Initialization. May 9 00:11:04.856403 systemd[1]: Stopped target local-fs.target - Local File Systems. May 9 00:11:04.858400 systemd[1]: Stopped target swap.target - Swaps. May 9 00:11:04.859917 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 9 00:11:04.860049 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. May 9 00:11:04.862038 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. May 9 00:11:04.863186 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 9 00:11:04.864791 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. May 9 00:11:04.866424 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 9 00:11:04.867678 systemd[1]: dracut-initqueue.service: Deactivated successfully. 
May 9 00:11:04.867799 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. May 9 00:11:04.870041 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 9 00:11:04.870183 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 9 00:11:04.871336 systemd[1]: ignition-files.service: Deactivated successfully. May 9 00:11:04.871501 systemd[1]: Stopped ignition-files.service - Ignition (files). May 9 00:11:04.873078 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. May 9 00:11:04.873197 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. May 9 00:11:04.880568 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 9 00:11:04.883553 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 9 00:11:04.883858 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. May 9 00:11:04.893448 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 9 00:11:04.897411 ignition[1022]: INFO : Ignition 2.20.0 May 9 00:11:04.897411 ignition[1022]: INFO : Stage: umount May 9 00:11:04.897411 ignition[1022]: INFO : no configs at "/usr/lib/ignition/base.d" May 9 00:11:04.897411 ignition[1022]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" May 9 00:11:04.897411 ignition[1022]: INFO : umount: umount passed May 9 00:11:04.897411 ignition[1022]: INFO : Ignition finished successfully May 9 00:11:04.896248 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 9 00:11:04.896376 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. May 9 00:11:04.896940 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 9 00:11:04.897025 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 9 00:11:04.906915 systemd[1]: sysroot-boot.mount: Deactivated successfully. 
May 9 00:11:04.907346 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 9 00:11:04.907443 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. May 9 00:11:04.909209 systemd[1]: ignition-mount.service: Deactivated successfully. May 9 00:11:04.909292 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 9 00:11:04.909997 systemd[1]: sysroot-boot.service: Deactivated successfully. May 9 00:11:04.910067 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. May 9 00:11:04.912437 systemd[1]: ignition-disks.service: Deactivated successfully. May 9 00:11:04.912476 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 9 00:11:04.913623 systemd[1]: ignition-kargs.service: Deactivated successfully. May 9 00:11:04.913657 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 9 00:11:04.914526 systemd[1]: ignition-fetch.service: Deactivated successfully. May 9 00:11:04.914564 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). May 9 00:11:04.915473 systemd[1]: Stopped target network.target - Network. May 9 00:11:04.916442 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 9 00:11:04.916479 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 9 00:11:04.917398 systemd[1]: Stopped target paths.target - Path Units. May 9 00:11:04.918250 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 9 00:11:04.918501 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 9 00:11:04.919239 systemd[1]: Stopped target slices.target - Slice Units. May 9 00:11:04.920141 systemd[1]: Stopped target sockets.target - Socket Units. May 9 00:11:04.921079 systemd[1]: iscsid.socket: Deactivated successfully. May 9 00:11:04.921107 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 9 00:11:04.921989 systemd[1]: iscsiuio.socket: Deactivated successfully. 
May 9 00:11:04.922013 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 9 00:11:04.922897 systemd[1]: ignition-setup.service: Deactivated successfully. May 9 00:11:04.922928 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 9 00:11:04.923793 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 9 00:11:04.923824 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 9 00:11:04.924760 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 9 00:11:04.924789 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. May 9 00:11:04.925785 systemd[1]: Stopping systemd-networkd.service - Network Configuration... May 9 00:11:04.926794 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 9 00:11:04.929357 systemd-networkd[784]: eth0: DHCPv6 lease lost May 9 00:11:04.934340 systemd-networkd[784]: eth1: DHCPv6 lease lost May 9 00:11:04.935897 systemd[1]: systemd-networkd.service: Deactivated successfully. May 9 00:11:04.935968 systemd[1]: Stopped systemd-networkd.service - Network Configuration. May 9 00:11:04.936782 systemd[1]: systemd-resolved.service: Deactivated successfully. May 9 00:11:04.936866 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. May 9 00:11:04.939631 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 9 00:11:04.939664 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 9 00:11:04.945406 systemd[1]: Stopping network-cleanup.service - Network Cleanup... May 9 00:11:04.946603 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 9 00:11:04.946655 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 9 00:11:04.947145 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 9 00:11:04.947175 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. 
May 9 00:11:04.947673 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 9 00:11:04.947704 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. May 9 00:11:04.948628 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. May 9 00:11:04.948661 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. May 9 00:11:04.949732 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... May 9 00:11:04.957156 systemd[1]: network-cleanup.service: Deactivated successfully. May 9 00:11:04.957249 systemd[1]: Stopped network-cleanup.service - Network Cleanup. May 9 00:11:04.958405 systemd[1]: systemd-udevd.service: Deactivated successfully. May 9 00:11:04.958507 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. May 9 00:11:04.959778 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 9 00:11:04.959824 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. May 9 00:11:04.960757 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 9 00:11:04.960788 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. May 9 00:11:04.961849 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 9 00:11:04.961894 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. May 9 00:11:04.963325 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 9 00:11:04.963359 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. May 9 00:11:04.964411 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 9 00:11:04.964461 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 9 00:11:04.977421 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... May 9 00:11:04.977925 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. 
May 9 00:11:04.977971 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 9 00:11:04.978479 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 9 00:11:04.978510 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 9 00:11:04.981917 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 9 00:11:04.981990 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. May 9 00:11:04.982888 systemd[1]: Reached target initrd-switch-root.target - Switch Root. May 9 00:11:04.985395 systemd[1]: Starting initrd-switch-root.service - Switch Root... May 9 00:11:04.992676 systemd[1]: Switching root. May 9 00:11:05.052934 systemd-journald[188]: Journal stopped May 9 00:11:05.933850 systemd-journald[188]: Received SIGTERM from PID 1 (systemd). May 9 00:11:05.933904 kernel: SELinux: policy capability network_peer_controls=1 May 9 00:11:05.933915 kernel: SELinux: policy capability open_perms=1 May 9 00:11:05.933923 kernel: SELinux: policy capability extended_socket_class=1 May 9 00:11:05.933930 kernel: SELinux: policy capability always_check_network=0 May 9 00:11:05.933937 kernel: SELinux: policy capability cgroup_seclabel=1 May 9 00:11:05.933946 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 9 00:11:05.933953 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 9 00:11:05.933963 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 9 00:11:05.933972 kernel: audit: type=1403 audit(1746749465.215:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 May 9 00:11:05.933983 systemd[1]: Successfully loaded SELinux policy in 48.465ms. May 9 00:11:05.934000 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 8.089ms. 
May 9 00:11:05.934009 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) May 9 00:11:05.934017 systemd[1]: Detected virtualization kvm. May 9 00:11:05.934025 systemd[1]: Detected architecture x86-64. May 9 00:11:05.934032 systemd[1]: Detected first boot. May 9 00:11:05.934041 systemd[1]: Hostname set to . May 9 00:11:05.934052 systemd[1]: Initializing machine ID from VM UUID. May 9 00:11:05.934611 zram_generator::config[1065]: No configuration found. May 9 00:11:05.934635 systemd[1]: Populated /etc with preset unit settings. May 9 00:11:05.934646 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 9 00:11:05.934654 systemd[1]: Stopped initrd-switch-root.service - Switch Root. May 9 00:11:05.934665 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 9 00:11:05.934674 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. May 9 00:11:05.934682 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. May 9 00:11:05.934690 systemd[1]: Created slice system-getty.slice - Slice /system/getty. May 9 00:11:05.934701 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. May 9 00:11:05.934713 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. May 9 00:11:05.934730 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. May 9 00:11:05.934746 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. May 9 00:11:05.934762 systemd[1]: Created slice user.slice - User and Session Slice. 
May 9 00:11:05.934777 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 9 00:11:05.934790 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 9 00:11:05.934804 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. May 9 00:11:05.934824 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. May 9 00:11:05.934840 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. May 9 00:11:05.934856 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 9 00:11:05.934872 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... May 9 00:11:05.934889 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 9 00:11:05.934906 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. May 9 00:11:05.934915 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. May 9 00:11:05.934926 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. May 9 00:11:05.934934 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. May 9 00:11:05.934942 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 9 00:11:05.934950 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 9 00:11:05.934958 systemd[1]: Reached target slices.target - Slice Units. May 9 00:11:05.934967 systemd[1]: Reached target swap.target - Swaps. May 9 00:11:05.934975 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. May 9 00:11:05.934983 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. May 9 00:11:05.934991 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
May 9 00:11:05.935001 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 9 00:11:05.935010 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 9 00:11:05.935017 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. May 9 00:11:05.935025 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... May 9 00:11:05.935033 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... May 9 00:11:05.935041 systemd[1]: Mounting media.mount - External Media Directory... May 9 00:11:05.935050 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 9 00:11:05.935062 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... May 9 00:11:05.935071 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... May 9 00:11:05.935079 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... May 9 00:11:05.935088 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 9 00:11:05.935096 systemd[1]: Reached target machines.target - Containers. May 9 00:11:05.935106 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... May 9 00:11:05.935121 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 9 00:11:05.935140 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 9 00:11:05.935156 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... May 9 00:11:05.935172 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 9 00:11:05.935188 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... 
May 9 00:11:05.935205 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 9 00:11:05.935220 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... May 9 00:11:05.935236 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 9 00:11:05.935254 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 9 00:11:05.935274 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 9 00:11:05.935290 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. May 9 00:11:05.936335 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 9 00:11:05.936351 systemd[1]: Stopped systemd-fsck-usr.service. May 9 00:11:05.936360 systemd[1]: Starting systemd-journald.service - Journal Service... May 9 00:11:05.936368 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 9 00:11:05.936376 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 9 00:11:05.936384 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... May 9 00:11:05.936393 kernel: fuse: init (API version 7.39) May 9 00:11:05.936404 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 9 00:11:05.936412 systemd[1]: verity-setup.service: Deactivated successfully. May 9 00:11:05.936420 systemd[1]: Stopped verity-setup.service. May 9 00:11:05.936429 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 9 00:11:05.936437 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. May 9 00:11:05.936445 kernel: ACPI: bus type drm_connector registered May 9 00:11:05.936453 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. 
May 9 00:11:05.936461 systemd[1]: Mounted media.mount - External Media Directory.
May 9 00:11:05.936470 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
May 9 00:11:05.936478 kernel: loop: module loaded
May 9 00:11:05.936501 systemd-journald[1152]: Collecting audit messages is disabled.
May 9 00:11:05.936531 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
May 9 00:11:05.936543 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
May 9 00:11:05.936552 systemd-journald[1152]: Journal started
May 9 00:11:05.936570 systemd-journald[1152]: Runtime Journal (/run/log/journal/1333148516874e1da1c7639916dc6dc2) is 4.8M, max 38.4M, 33.6M free.
May 9 00:11:05.663654 systemd[1]: Queued start job for default target multi-user.target.
May 9 00:11:05.682579 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
May 9 00:11:05.683081 systemd[1]: systemd-journald.service: Deactivated successfully.
May 9 00:11:05.938338 systemd[1]: Started systemd-journald.service - Journal Service.
May 9 00:11:05.938600 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
May 9 00:11:05.939490 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 9 00:11:05.940159 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 9 00:11:05.940466 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
May 9 00:11:05.941165 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 9 00:11:05.941434 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 9 00:11:05.942128 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 9 00:11:05.942326 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 9 00:11:05.942998 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 9 00:11:05.943134 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 9 00:11:05.943980 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 9 00:11:05.944127 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
May 9 00:11:05.945628 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 9 00:11:05.945719 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 9 00:11:05.946360 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 9 00:11:05.948451 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 9 00:11:05.949549 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
May 9 00:11:05.961024 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 9 00:11:05.967723 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
May 9 00:11:05.972448 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
May 9 00:11:05.973055 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 9 00:11:05.973095 systemd[1]: Reached target local-fs.target - Local File Systems.
May 9 00:11:05.975200 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
May 9 00:11:05.987415 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
May 9 00:11:05.994603 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
May 9 00:11:05.995577 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 9 00:11:05.998530 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
May 9 00:11:06.001626 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
May 9 00:11:06.002128 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 9 00:11:06.004987 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
May 9 00:11:06.005973 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 9 00:11:06.009422 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 9 00:11:06.011784 systemd-journald[1152]: Time spent on flushing to /var/log/journal/1333148516874e1da1c7639916dc6dc2 is 30.352ms for 1129 entries.
May 9 00:11:06.011784 systemd-journald[1152]: System Journal (/var/log/journal/1333148516874e1da1c7639916dc6dc2) is 8.0M, max 584.8M, 576.8M free.
May 9 00:11:06.055922 systemd-journald[1152]: Received client request to flush runtime journal.
May 9 00:11:06.055964 kernel: loop0: detected capacity change from 0 to 138184
May 9 00:11:06.013122 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
May 9 00:11:06.014990 systemd[1]: Starting systemd-sysusers.service - Create System Users...
May 9 00:11:06.021565 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 9 00:11:06.031565 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
May 9 00:11:06.032819 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
May 9 00:11:06.033803 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
May 9 00:11:06.042474 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
May 9 00:11:06.046184 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 9 00:11:06.056826 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
May 9 00:11:06.057690 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
May 9 00:11:06.061119 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
May 9 00:11:06.069051 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
May 9 00:11:06.077289 udevadm[1193]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
May 9 00:11:06.092151 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 9 00:11:06.092684 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
May 9 00:11:06.094756 systemd[1]: Finished systemd-sysusers.service - Create System Users.
May 9 00:11:06.096761 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 9 00:11:06.107594 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 9 00:11:06.119367 kernel: loop1: detected capacity change from 0 to 140992
May 9 00:11:06.134453 systemd-tmpfiles[1204]: ACLs are not supported, ignoring.
May 9 00:11:06.134471 systemd-tmpfiles[1204]: ACLs are not supported, ignoring.
May 9 00:11:06.138472 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 9 00:11:06.168335 kernel: loop2: detected capacity change from 0 to 210664
May 9 00:11:06.220530 kernel: loop3: detected capacity change from 0 to 8
May 9 00:11:06.239333 kernel: loop4: detected capacity change from 0 to 138184
May 9 00:11:06.269341 kernel: loop5: detected capacity change from 0 to 140992
May 9 00:11:06.287445 kernel: loop6: detected capacity change from 0 to 210664
May 9 00:11:06.319351 kernel: loop7: detected capacity change from 0 to 8
May 9 00:11:06.322812 (sd-merge)[1210]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'.
May 9 00:11:06.323570 (sd-merge)[1210]: Merged extensions into '/usr'.
May 9 00:11:06.330824 systemd[1]: Reloading requested from client PID 1185 ('systemd-sysext') (unit systemd-sysext.service)...
May 9 00:11:06.330847 systemd[1]: Reloading...
May 9 00:11:06.411494 zram_generator::config[1236]: No configuration found.
May 9 00:11:06.519473 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 9 00:11:06.556600 systemd[1]: Reloading finished in 225 ms.
May 9 00:11:06.557071 ldconfig[1180]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 9 00:11:06.581381 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
May 9 00:11:06.582505 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
May 9 00:11:06.593719 systemd[1]: Starting ensure-sysext.service...
May 9 00:11:06.595388 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 9 00:11:06.604554 systemd[1]: Reloading requested from client PID 1279 ('systemctl') (unit ensure-sysext.service)...
May 9 00:11:06.604630 systemd[1]: Reloading...
May 9 00:11:06.621530 systemd-tmpfiles[1280]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 9 00:11:06.623584 systemd-tmpfiles[1280]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
May 9 00:11:06.624153 systemd-tmpfiles[1280]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 9 00:11:06.624385 systemd-tmpfiles[1280]: ACLs are not supported, ignoring.
May 9 00:11:06.624493 systemd-tmpfiles[1280]: ACLs are not supported, ignoring.
May 9 00:11:06.629711 systemd-tmpfiles[1280]: Detected autofs mount point /boot during canonicalization of boot.
May 9 00:11:06.629784 systemd-tmpfiles[1280]: Skipping /boot
May 9 00:11:06.638145 systemd-tmpfiles[1280]: Detected autofs mount point /boot during canonicalization of boot.
May 9 00:11:06.642054 systemd-tmpfiles[1280]: Skipping /boot
May 9 00:11:06.664324 zram_generator::config[1304]: No configuration found.
May 9 00:11:06.749731 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 9 00:11:06.787345 systemd[1]: Reloading finished in 182 ms.
May 9 00:11:06.800414 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
May 9 00:11:06.805625 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 9 00:11:06.811635 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 9 00:11:06.814470 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
May 9 00:11:06.819384 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
May 9 00:11:06.827524 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 9 00:11:06.829779 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 9 00:11:06.833483 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
May 9 00:11:06.836871 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 9 00:11:06.836993 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 9 00:11:06.843506 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 9 00:11:06.847285 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 9 00:11:06.850069 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 9 00:11:06.850738 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 9 00:11:06.853499 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
May 9 00:11:06.853959 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 9 00:11:06.855197 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 9 00:11:06.855385 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 9 00:11:06.856670 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 9 00:11:06.857236 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 9 00:11:06.858500 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 9 00:11:06.858973 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 9 00:11:06.871380 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 9 00:11:06.871551 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 9 00:11:06.876695 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 9 00:11:06.878521 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 9 00:11:06.880499 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 9 00:11:06.881218 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 9 00:11:06.882468 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 9 00:11:06.883381 systemd-udevd[1362]: Using default interface naming scheme 'v255'.
May 9 00:11:06.883382 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
May 9 00:11:06.894530 systemd[1]: Starting systemd-update-done.service - Update is Completed...
May 9 00:11:06.896837 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
May 9 00:11:06.897943 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 9 00:11:06.898053 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 9 00:11:06.900005 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 9 00:11:06.900106 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 9 00:11:06.902040 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 9 00:11:06.902336 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 9 00:11:06.912780 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 9 00:11:06.912957 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 9 00:11:06.919254 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 9 00:11:06.920717 augenrules[1391]: No rules
May 9 00:11:06.926520 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 9 00:11:06.928226 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 9 00:11:06.931573 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 9 00:11:06.932101 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 9 00:11:06.932212 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 9 00:11:06.933987 systemd[1]: audit-rules.service: Deactivated successfully.
May 9 00:11:06.934668 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 9 00:11:06.935931 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 9 00:11:06.937783 systemd[1]: Finished systemd-update-done.service - Update is Completed.
May 9 00:11:06.938617 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 9 00:11:06.938716 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 9 00:11:06.940143 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 9 00:11:06.940246 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 9 00:11:06.941842 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 9 00:11:06.941946 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 9 00:11:06.952977 systemd[1]: Finished ensure-sysext.service.
May 9 00:11:06.953682 systemd[1]: Started systemd-userdbd.service - User Database Manager.
May 9 00:11:06.955584 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
May 9 00:11:06.956708 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 9 00:11:06.956813 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 9 00:11:06.974972 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 9 00:11:06.975565 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 9 00:11:06.975634 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 9 00:11:06.977353 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
May 9 00:11:06.978364 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 9 00:11:07.015223 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
May 9 00:11:07.048639 systemd-resolved[1355]: Positive Trust Anchors:
May 9 00:11:07.048651 systemd-resolved[1355]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 9 00:11:07.048676 systemd-resolved[1355]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 9 00:11:07.049381 systemd-networkd[1430]: lo: Link UP
May 9 00:11:07.049394 systemd-networkd[1430]: lo: Gained carrier
May 9 00:11:07.051455 systemd-networkd[1430]: Enumeration completed
May 9 00:11:07.051546 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 9 00:11:07.053233 systemd-networkd[1430]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 9 00:11:07.053244 systemd-networkd[1430]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 9 00:11:07.054767 systemd-networkd[1430]: eth0: Link UP
May 9 00:11:07.054774 systemd-networkd[1430]: eth0: Gained carrier
May 9 00:11:07.054784 systemd-networkd[1430]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 9 00:11:07.057733 systemd-resolved[1355]: Using system hostname 'ci-4152-2-3-n-8b48d2c086'.
May 9 00:11:07.061468 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
May 9 00:11:07.062428 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 9 00:11:07.063380 systemd[1]: Reached target network.target - Network.
May 9 00:11:07.063796 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 9 00:11:07.077166 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
May 9 00:11:07.078399 systemd[1]: Reached target time-set.target - System Time Set.
May 9 00:11:07.089018 systemd-networkd[1430]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 9 00:11:07.091773 systemd-networkd[1430]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 9 00:11:07.091784 systemd-networkd[1430]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
May 9 00:11:07.092185 systemd-networkd[1430]: eth1: Link UP
May 9 00:11:07.092195 systemd-networkd[1430]: eth1: Gained carrier
May 9 00:11:07.092205 systemd-networkd[1430]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 9 00:11:07.096330 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
May 9 00:11:07.102351 kernel: ACPI: button: Power Button [PWRF]
May 9 00:11:07.108324 kernel: mousedev: PS/2 mouse device common for all mice
May 9 00:11:07.113398 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 40 scanned by (udev-worker) (1406)
May 9 00:11:07.113477 systemd-networkd[1430]: eth0: DHCPv4 address 157.180.45.97/32, gateway 172.31.1.1 acquired from 172.31.1.1
May 9 00:11:07.114120 systemd-timesyncd[1432]: Network configuration changed, trying to establish connection.
May 9 00:11:07.127352 systemd-networkd[1430]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1
May 9 00:11:07.144005 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped.
May 9 00:11:07.144049 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 9 00:11:07.144117 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 9 00:11:07.149484 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 9 00:11:07.152717 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 9 00:11:07.155464 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 9 00:11:07.155971 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 9 00:11:07.155998 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 9 00:11:07.156006 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 9 00:11:07.156238 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 9 00:11:07.156790 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 9 00:11:07.167800 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 9 00:11:07.168122 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 9 00:11:07.169107 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 9 00:11:07.169473 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 9 00:11:07.176830 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
May 9 00:11:07.179836 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4
May 9 00:11:07.188402 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
May 9 00:11:07.188640 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
May 9 00:11:07.188758 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
May 9 00:11:07.187794 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
May 9 00:11:07.188276 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 9 00:11:07.188327 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 9 00:11:07.201667 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
May 9 00:11:07.208409 kernel: EDAC MC: Ver: 3.0.0
May 9 00:11:07.234488 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 9 00:11:07.238327 kernel: [drm] pci: virtio-vga detected at 0000:00:01.0
May 9 00:11:07.238382 kernel: virtio-pci 0000:00:01.0: vgaarb: deactivate vga console
May 9 00:11:07.242323 kernel: Console: switching to colour dummy device 80x25
May 9 00:11:07.244649 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
May 9 00:11:07.244687 kernel: [drm] features: -context_init
May 9 00:11:07.246861 kernel: [drm] number of scanouts: 1
May 9 00:11:07.246891 kernel: [drm] number of cap sets: 0
May 9 00:11:07.247630 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 9 00:11:07.247805 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 9 00:11:07.250346 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0
May 9 00:11:07.256394 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
May 9 00:11:07.256429 kernel: Console: switching to colour frame buffer device 160x50
May 9 00:11:07.259907 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 9 00:11:07.265252 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device
May 9 00:11:07.275550 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 9 00:11:07.275702 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 9 00:11:07.281430 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 9 00:11:07.335293 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 9 00:11:07.395238 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
May 9 00:11:07.399459 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
May 9 00:11:07.416950 lvm[1478]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 9 00:11:07.441137 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
May 9 00:11:07.442872 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 9 00:11:07.443016 systemd[1]: Reached target sysinit.target - System Initialization.
May 9 00:11:07.443238 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
May 9 00:11:07.443420 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
May 9 00:11:07.443800 systemd[1]: Started logrotate.timer - Daily rotation of log files.
May 9 00:11:07.444030 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
May 9 00:11:07.444137 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
May 9 00:11:07.444231 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
May 9 00:11:07.444267 systemd[1]: Reached target paths.target - Path Units.
May 9 00:11:07.445119 systemd[1]: Reached target timers.target - Timer Units.
May 9 00:11:07.455874 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
May 9 00:11:07.459380 systemd[1]: Starting docker.socket - Docker Socket for the API...
May 9 00:11:07.465252 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
May 9 00:11:07.467146 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
May 9 00:11:07.470918 systemd[1]: Listening on docker.socket - Docker Socket for the API.
May 9 00:11:07.471160 systemd[1]: Reached target sockets.target - Socket Units.
May 9 00:11:07.471250 systemd[1]: Reached target basic.target - Basic System.
May 9 00:11:07.472122 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
May 9 00:11:07.472175 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
May 9 00:11:07.475209 lvm[1482]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 9 00:11:07.481533 systemd[1]: Starting containerd.service - containerd container runtime...
May 9 00:11:07.487418 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
May 9 00:11:07.491898 systemd-timesyncd[1432]: Contacted time server 129.70.132.34:123 (0.flatcar.pool.ntp.org).
May 9 00:11:07.492001 systemd-timesyncd[1432]: Initial clock synchronization to Fri 2025-05-09 00:11:07.559962 UTC.
May 9 00:11:07.494197 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
May 9 00:11:07.496424 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
May 9 00:11:07.499445 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
May 9 00:11:07.500369 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
May 9 00:11:07.501972 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
May 9 00:11:07.505400 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
May 9 00:11:07.514455 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent.
May 9 00:11:07.520442 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
May 9 00:11:07.524943 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
May 9 00:11:07.534415 systemd[1]: Starting systemd-logind.service - User Login Management...
May 9 00:11:07.546199 jq[1488]: false
May 9 00:11:07.548449 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
May 9 00:11:07.548845 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
May 9 00:11:07.551209 systemd[1]: Starting update-engine.service - Update Engine...
May 9 00:11:07.561356 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
May 9 00:11:07.563667 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
May 9 00:11:07.573613 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
May 9 00:11:07.574535 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
May 9 00:11:07.585565 jq[1501]: true
May 9 00:11:07.590021 coreos-metadata[1484]: May 09 00:11:07.589 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1
May 9 00:11:07.592653 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
May 9 00:11:07.592881 coreos-metadata[1484]: May 09 00:11:07.592 INFO Fetch successful
May 9 00:11:07.592803 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
May 9 00:11:07.593029 coreos-metadata[1484]: May 09 00:11:07.592 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1
May 9 00:11:07.599098 coreos-metadata[1484]: May 09 00:11:07.599 INFO Fetch successful
May 9 00:11:07.608221 dbus-daemon[1485]: [system] SELinux support is enabled
May 9 00:11:07.610448 systemd[1]: Started dbus.service - D-Bus System Message Bus.
May 9 00:11:07.623532 extend-filesystems[1489]: Found loop4
May 9 00:11:07.623532 extend-filesystems[1489]: Found loop5
May 9 00:11:07.623532 extend-filesystems[1489]: Found loop6
May 9 00:11:07.623532 extend-filesystems[1489]: Found loop7
May 9 00:11:07.623532 extend-filesystems[1489]: Found sda
May 9 00:11:07.623532 extend-filesystems[1489]: Found sda1
May 9 00:11:07.623532 extend-filesystems[1489]: Found sda2
May 9 00:11:07.623532 extend-filesystems[1489]: Found sda3
May 9 00:11:07.623532 extend-filesystems[1489]: Found usr
May 9 00:11:07.623532 extend-filesystems[1489]: Found sda4
May 9 00:11:07.623532 extend-filesystems[1489]: Found sda6
May 9 00:11:07.623532 extend-filesystems[1489]: Found sda7
May 9 00:11:07.623532 extend-filesystems[1489]: Found sda9
May 9 00:11:07.623532 extend-filesystems[1489]: Checking size of /dev/sda9
May 9 00:11:07.679627 tar[1510]: linux-amd64/helm
May 9 00:11:07.684864 extend-filesystems[1489]: Resized partition /dev/sda9
May 9 00:11:07.632055 systemd[1]: motdgen.service: Deactivated successfully.
May 9 00:11:07.689822 update_engine[1498]: I20250509 00:11:07.684216 1498 main.cc:92] Flatcar Update Engine starting
May 9 00:11:07.689974 jq[1516]: true
May 9 00:11:07.690076 extend-filesystems[1530]: resize2fs 1.47.1 (20-May-2024)
May 9 00:11:07.632185 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
May 9 00:11:07.633337 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
May 9 00:11:07.633367 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
May 9 00:11:07.633720 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
May 9 00:11:07.633733 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
May 9 00:11:07.649930 (ntainerd)[1517]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
May 9 00:11:07.710665 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks
May 9 00:11:07.679833 systemd-logind[1495]: New seat seat0.
May 9 00:11:07.685704 systemd-logind[1495]: Watching system buttons on /dev/input/event2 (Power Button)
May 9 00:11:07.685717 systemd-logind[1495]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
May 9 00:11:07.685843 systemd[1]: Started systemd-logind.service - User Login Management.
May 9 00:11:07.717880 update_engine[1498]: I20250509 00:11:07.711623 1498 update_check_scheduler.cc:74] Next update check in 4m43s
May 9 00:11:07.711877 systemd[1]: Started update-engine.service - Update Engine.
May 9 00:11:07.723485 systemd[1]: Started locksmithd.service - Cluster reboot manager.
May 9 00:11:07.746732 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
May 9 00:11:07.750766 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
May 9 00:11:07.777023 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 40 scanned by (udev-worker) (1419)
May 9 00:11:07.834660 bash[1554]: Updated "/home/core/.ssh/authorized_keys"
May 9 00:11:07.836200 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
May 9 00:11:07.847225 systemd[1]: Starting sshkeys.service...
May 9 00:11:07.874785 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
May 9 00:11:07.885540 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
May 9 00:11:07.905944 containerd[1517]: time="2025-05-09T00:11:07.903549564Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
May 9 00:11:07.911325 kernel: EXT4-fs (sda9): resized filesystem to 9393147
May 9 00:11:07.925940 locksmithd[1539]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
May 9 00:11:07.927061 extend-filesystems[1530]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required
May 9 00:11:07.927061 extend-filesystems[1530]: old_desc_blocks = 1, new_desc_blocks = 5
May 9 00:11:07.927061 extend-filesystems[1530]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long.
May 9 00:11:07.937031 extend-filesystems[1489]: Resized filesystem in /dev/sda9
May 9 00:11:07.937031 extend-filesystems[1489]: Found sr0
May 9 00:11:07.937599 coreos-metadata[1565]: May 09 00:11:07.930 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1
May 9 00:11:07.937599 coreos-metadata[1565]: May 09 00:11:07.931 INFO Fetch successful
May 9 00:11:07.928591 systemd[1]: extend-filesystems.service: Deactivated successfully.
May 9 00:11:07.928744 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
May 9 00:11:07.938370 unknown[1565]: wrote ssh authorized keys file for user: core
May 9 00:11:07.946577 containerd[1517]: time="2025-05-09T00:11:07.944247443Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
May 9 00:11:07.947126 containerd[1517]: time="2025-05-09T00:11:07.947103580Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.89-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
May 9 00:11:07.948405 containerd[1517]: time="2025-05-09T00:11:07.947370901Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
May 9 00:11:07.948405 containerd[1517]: time="2025-05-09T00:11:07.947389596Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
May 9 00:11:07.948405 containerd[1517]: time="2025-05-09T00:11:07.947519109Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
May 9 00:11:07.948405 containerd[1517]: time="2025-05-09T00:11:07.947536832Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
May 9 00:11:07.948405 containerd[1517]: time="2025-05-09T00:11:07.947589120Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
May 9 00:11:07.948405 containerd[1517]: time="2025-05-09T00:11:07.947599610Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
May 9 00:11:07.948405 containerd[1517]: time="2025-05-09T00:11:07.947724314Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
May 9 00:11:07.948405 containerd[1517]: time="2025-05-09T00:11:07.947736026Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
May 9 00:11:07.948405 containerd[1517]: time="2025-05-09T00:11:07.947745834Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
May 9 00:11:07.948405 containerd[1517]: time="2025-05-09T00:11:07.947752887Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
May 9 00:11:07.948405 containerd[1517]: time="2025-05-09T00:11:07.947806908Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
May 9 00:11:07.948405 containerd[1517]: time="2025-05-09T00:11:07.947954585Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
May 9 00:11:07.948595 containerd[1517]: time="2025-05-09T00:11:07.948023255Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
May 9 00:11:07.948595 containerd[1517]: time="2025-05-09T00:11:07.948033654Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
May 9 00:11:07.948595 containerd[1517]: time="2025-05-09T00:11:07.948093857Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
May 9 00:11:07.948595 containerd[1517]: time="2025-05-09T00:11:07.948129914Z" level=info msg="metadata content store policy set" policy=shared
May 9 00:11:07.954541 containerd[1517]: time="2025-05-09T00:11:07.954494491Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
May 9 00:11:07.955372 containerd[1517]: time="2025-05-09T00:11:07.954568170Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
May 9 00:11:07.955415 containerd[1517]: time="2025-05-09T00:11:07.955392566Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
May 9 00:11:07.955436 containerd[1517]: time="2025-05-09T00:11:07.955418294Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
May 9 00:11:07.955436 containerd[1517]: time="2025-05-09T00:11:07.955431569Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
May 9 00:11:07.956212 containerd[1517]: time="2025-05-09T00:11:07.956184321Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
May 9 00:11:07.956427 containerd[1517]: time="2025-05-09T00:11:07.956406047Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
May 9 00:11:07.957124 update-ssh-keys[1575]: Updated "/home/core/.ssh/authorized_keys"
May 9 00:11:07.958913 containerd[1517]: time="2025-05-09T00:11:07.957999865Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
May 9 00:11:07.958913 containerd[1517]: time="2025-05-09T00:11:07.958019412Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
May 9 00:11:07.958913 containerd[1517]: time="2025-05-09T00:11:07.958031244Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
May 9 00:11:07.958913 containerd[1517]: time="2025-05-09T00:11:07.958042837Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
May 9 00:11:07.958913 containerd[1517]: time="2025-05-09T00:11:07.958052264Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
May 9 00:11:07.958913 containerd[1517]: time="2025-05-09T00:11:07.958061201Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
May 9 00:11:07.958913 containerd[1517]: time="2025-05-09T00:11:07.958071500Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
May 9 00:11:07.958913 containerd[1517]: time="2025-05-09T00:11:07.958083803Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
May 9 00:11:07.958913 containerd[1517]: time="2025-05-09T00:11:07.958093842Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
May 9 00:11:07.958913 containerd[1517]: time="2025-05-09T00:11:07.958103070Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
May 9 00:11:07.958913 containerd[1517]: time="2025-05-09T00:11:07.958112116Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
May 9 00:11:07.958913 containerd[1517]: time="2025-05-09T00:11:07.958128397Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
May 9 00:11:07.958913 containerd[1517]: time="2025-05-09T00:11:07.958142223Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
May 9 00:11:07.958913 containerd[1517]: time="2025-05-09T00:11:07.958151670Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
May 9 00:11:07.957946 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
May 9 00:11:07.961084 containerd[1517]: time="2025-05-09T00:11:07.958162912Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
May 9 00:11:07.961084 containerd[1517]: time="2025-05-09T00:11:07.958173461Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
May 9 00:11:07.961084 containerd[1517]: time="2025-05-09T00:11:07.958183260Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
May 9 00:11:07.961084 containerd[1517]: time="2025-05-09T00:11:07.958191796Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
May 9 00:11:07.961084 containerd[1517]: time="2025-05-09T00:11:07.958200933Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
May 9 00:11:07.961084 containerd[1517]: time="2025-05-09T00:11:07.958210090Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
May 9 00:11:07.961084 containerd[1517]: time="2025-05-09T00:11:07.958223776Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
May 9 00:11:07.961084 containerd[1517]: time="2025-05-09T00:11:07.958232673Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
May 9 00:11:07.961084 containerd[1517]: time="2025-05-09T00:11:07.958241619Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
May 9 00:11:07.961084 containerd[1517]: time="2025-05-09T00:11:07.958250827Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
May 9 00:11:07.961084 containerd[1517]: time="2025-05-09T00:11:07.958260855Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
May 9 00:11:07.961084 containerd[1517]: time="2025-05-09T00:11:07.958278308Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
May 9 00:11:07.961084 containerd[1517]: time="2025-05-09T00:11:07.958290351Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
May 9 00:11:07.961084 containerd[1517]: time="2025-05-09T00:11:07.958320918Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
May 9 00:11:07.961264 containerd[1517]: time="2025-05-09T00:11:07.958361684Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
May 9 00:11:07.961264 containerd[1517]: time="2025-05-09T00:11:07.958377444Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
May 9 00:11:07.961264 containerd[1517]: time="2025-05-09T00:11:07.958386241Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
May 9 00:11:07.961264 containerd[1517]: time="2025-05-09T00:11:07.958395097Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
May 9 00:11:07.961264 containerd[1517]: time="2025-05-09T00:11:07.958402581Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
May 9 00:11:07.961264 containerd[1517]: time="2025-05-09T00:11:07.958412941Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
May 9 00:11:07.961264 containerd[1517]: time="2025-05-09T00:11:07.958420975Z" level=info msg="NRI interface is disabled by configuration."
May 9 00:11:07.961264 containerd[1517]: time="2025-05-09T00:11:07.958428259Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
May 9 00:11:07.961389 containerd[1517]: time="2025-05-09T00:11:07.958655054Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
May 9 00:11:07.961389 containerd[1517]: time="2025-05-09T00:11:07.958692816Z" level=info msg="Connect containerd service"
May 9 00:11:07.961389 containerd[1517]: time="2025-05-09T00:11:07.958723573Z" level=info msg="using legacy CRI server"
May 9 00:11:07.961389 containerd[1517]: time="2025-05-09T00:11:07.958728683Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
May 9 00:11:07.961389 containerd[1517]: time="2025-05-09T00:11:07.958810496Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
May 9 00:11:07.961389 containerd[1517]: time="2025-05-09T00:11:07.959186491Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 9 00:11:07.967926 containerd[1517]: time="2025-05-09T00:11:07.962419735Z" level=info msg="Start subscribing containerd event"
May 9 00:11:07.967926 containerd[1517]: time="2025-05-09T00:11:07.962462906Z" level=info msg="Start recovering state"
May 9 00:11:07.967926 containerd[1517]: time="2025-05-09T00:11:07.962519152Z" level=info msg="Start event monitor"
May 9 00:11:07.967926 containerd[1517]: time="2025-05-09T00:11:07.962530052Z" level=info msg="Start snapshots syncer"
May 9 00:11:07.967926 containerd[1517]: time="2025-05-09T00:11:07.962536454Z" level=info msg="Start cni network conf syncer for default"
May 9 00:11:07.967926 containerd[1517]: time="2025-05-09T00:11:07.962543868Z" level=info msg="Start streaming server"
May 9 00:11:07.967926 containerd[1517]: time="2025-05-09T00:11:07.962805388Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
May 9 00:11:07.967926 containerd[1517]: time="2025-05-09T00:11:07.962853448Z" level=info msg=serving... address=/run/containerd/containerd.sock
May 9 00:11:07.964744 systemd[1]: Finished sshkeys.service.
May 9 00:11:07.977035 containerd[1517]: time="2025-05-09T00:11:07.975353092Z" level=info msg="containerd successfully booted in 0.073968s"
May 9 00:11:07.975387 systemd[1]: Started containerd.service - containerd container runtime.
May 9 00:11:08.006545 sshd_keygen[1506]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
May 9 00:11:08.024016 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
May 9 00:11:08.035523 systemd[1]: Starting issuegen.service - Generate /run/issue...
May 9 00:11:08.050460 systemd[1]: issuegen.service: Deactivated successfully.
May 9 00:11:08.050605 systemd[1]: Finished issuegen.service - Generate /run/issue.
May 9 00:11:08.058519 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
May 9 00:11:08.067813 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
May 9 00:11:08.078715 systemd[1]: Started getty@tty1.service - Getty on tty1.
May 9 00:11:08.095737 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
May 9 00:11:08.097535 systemd[1]: Reached target getty.target - Login Prompts.
May 9 00:11:08.245820 tar[1510]: linux-amd64/LICENSE
May 9 00:11:08.246063 tar[1510]: linux-amd64/README.md
May 9 00:11:08.254130 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
May 9 00:11:08.566618 systemd-networkd[1430]: eth0: Gained IPv6LL
May 9 00:11:08.570499 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
May 9 00:11:08.572022 systemd[1]: Reached target network-online.target - Network is Online.
May 9 00:11:08.588029 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 9 00:11:08.595634 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
May 9 00:11:08.628923 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
May 9 00:11:09.014489 systemd-networkd[1430]: eth1: Gained IPv6LL
May 9 00:11:09.407845 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 9 00:11:09.410571 (kubelet)[1614]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 9 00:11:09.411862 systemd[1]: Reached target multi-user.target - Multi-User System.
May 9 00:11:09.419122 systemd[1]: Startup finished in 1.212s (kernel) + 9.524s (initrd) + 4.250s (userspace) = 14.986s.
May 9 00:11:10.109455 kubelet[1614]: E0509 00:11:10.109351 1614 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 9 00:11:10.111125 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 9 00:11:10.111409 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 9 00:11:20.361964 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
May 9 00:11:20.369686 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 9 00:11:20.450559 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 9 00:11:20.453224 (kubelet)[1635]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 9 00:11:20.488429 kubelet[1635]: E0509 00:11:20.488331 1635 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 9 00:11:20.490566 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 9 00:11:20.490721 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 9 00:11:30.741572 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
May 9 00:11:30.748802 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 9 00:11:30.844896 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 9 00:11:30.847493 (kubelet)[1651]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 9 00:11:30.882468 kubelet[1651]: E0509 00:11:30.882394 1651 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 9 00:11:30.884807 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 9 00:11:30.884924 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 9 00:11:41.135428 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
May 9 00:11:41.140508 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 9 00:11:41.209557 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 9 00:11:41.212587 (kubelet)[1667]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 9 00:11:41.248136 kubelet[1667]: E0509 00:11:41.248099 1667 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 9 00:11:41.250399 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 9 00:11:41.250527 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 9 00:11:51.387511 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
May 9 00:11:51.392464 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 9 00:11:51.463407 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 9 00:11:51.466181 (kubelet)[1683]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 9 00:11:51.500828 kubelet[1683]: E0509 00:11:51.500773 1683 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 9 00:11:51.503255 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 9 00:11:51.503422 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 9 00:11:53.447453 update_engine[1498]: I20250509 00:11:53.447363 1498 update_attempter.cc:509] Updating boot flags...
May 9 00:11:53.479330 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 40 scanned by (udev-worker) (1701)
May 9 00:11:53.511378 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 40 scanned by (udev-worker) (1703)
May 9 00:11:53.538387 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 40 scanned by (udev-worker) (1703)
May 9 00:12:01.637759 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
May 9 00:12:01.648586 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 9 00:12:01.746438 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 9 00:12:01.749019 (kubelet)[1721]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 9 00:12:01.778158 kubelet[1721]: E0509 00:12:01.778107 1721 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 9 00:12:01.780235 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 9 00:12:01.780387 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 9 00:12:11.887413 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6.
May 9 00:12:11.892492 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 9 00:12:11.968662 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 9 00:12:11.972418 (kubelet)[1738]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 9 00:12:12.007588 kubelet[1738]: E0509 00:12:12.007533 1738 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 9 00:12:12.009839 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 9 00:12:12.009999 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 9 00:12:22.137622 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7.
May 9 00:12:22.144573 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 9 00:12:22.238960 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 9 00:12:22.242205 (kubelet)[1754]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 9 00:12:22.273744 kubelet[1754]: E0509 00:12:22.273687 1754 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 9 00:12:22.275620 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 9 00:12:22.275792 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 9 00:12:32.387847 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8.
May 9 00:12:32.394640 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 9 00:12:32.516955 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 9 00:12:32.519773 (kubelet)[1771]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 9 00:12:32.554201 kubelet[1771]: E0509 00:12:32.554147 1771 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 9 00:12:32.556860 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 9 00:12:32.556992 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 9 00:12:42.637874 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9.
May 9 00:12:42.648586 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 9 00:12:42.753779 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 9 00:12:42.765482 (kubelet)[1788]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 9 00:12:42.795156 kubelet[1788]: E0509 00:12:42.795101 1788 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 9 00:12:42.796930 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 9 00:12:42.797063 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 9 00:12:52.887494 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10.
May 9 00:12:52.892710 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 9 00:12:52.970110 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 9 00:12:52.972990 (kubelet)[1804]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 9 00:12:53.003824 kubelet[1804]: E0509 00:12:53.003750 1804 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 9 00:12:53.006050 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 9 00:12:53.006173 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 9 00:12:55.347212 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
May 9 00:12:55.348247 systemd[1]: Started sshd@0-157.180.45.97:22-139.178.68.195:47800.service - OpenSSH per-connection server daemon (139.178.68.195:47800).
May 9 00:12:56.321999 sshd[1814]: Accepted publickey for core from 139.178.68.195 port 47800 ssh2: RSA SHA256:kWmuzyOdL82NqCTDeKfCPqtPYuFTqtQu4IYGGTbCa4E
May 9 00:12:56.323745 sshd-session[1814]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 00:12:56.331110 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
May 9 00:12:56.341506 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
May 9 00:12:56.343969 systemd-logind[1495]: New session 1 of user core.
May 9 00:12:56.351468 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
May 9 00:12:56.358584 systemd[1]: Starting user@500.service - User Manager for UID 500...
May 9 00:12:56.361567 (systemd)[1818]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
May 9 00:12:56.446891 systemd[1818]: Queued start job for default target default.target.
May 9 00:12:56.453007 systemd[1818]: Created slice app.slice - User Application Slice.
May 9 00:12:56.453029 systemd[1818]: Reached target paths.target - Paths.
May 9 00:12:56.453040 systemd[1818]: Reached target timers.target - Timers.
May 9 00:12:56.453971 systemd[1818]: Starting dbus.socket - D-Bus User Message Bus Socket...
May 9 00:12:56.463460 systemd[1818]: Listening on dbus.socket - D-Bus User Message Bus Socket.
May 9 00:12:56.463500 systemd[1818]: Reached target sockets.target - Sockets.
May 9 00:12:56.463511 systemd[1818]: Reached target basic.target - Basic System.
May 9 00:12:56.463538 systemd[1818]: Reached target default.target - Main User Target.
May 9 00:12:56.463559 systemd[1818]: Startup finished in 97ms.
May 9 00:12:56.463799 systemd[1]: Started user@500.service - User Manager for UID 500.
May 9 00:12:56.466054 systemd[1]: Started session-1.scope - Session 1 of User core.
May 9 00:12:57.152136 systemd[1]: Started sshd@1-157.180.45.97:22-139.178.68.195:47804.service - OpenSSH per-connection server daemon (139.178.68.195:47804).
May 9 00:12:58.119973 sshd[1829]: Accepted publickey for core from 139.178.68.195 port 47804 ssh2: RSA SHA256:kWmuzyOdL82NqCTDeKfCPqtPYuFTqtQu4IYGGTbCa4E
May 9 00:12:58.121698 sshd-session[1829]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 00:12:58.126500 systemd-logind[1495]: New session 2 of user core.
May 9 00:12:58.132432 systemd[1]: Started session-2.scope - Session 2 of User core.
May 9 00:12:58.790745 sshd[1831]: Connection closed by 139.178.68.195 port 47804
May 9 00:12:58.791808 sshd-session[1829]: pam_unix(sshd:session): session closed for user core
May 9 00:12:58.795831 systemd-logind[1495]: Session 2 logged out. Waiting for processes to exit.
May 9 00:12:58.795855 systemd[1]: sshd@1-157.180.45.97:22-139.178.68.195:47804.service: Deactivated successfully.
May 9 00:12:58.798694 systemd[1]: session-2.scope: Deactivated successfully.
May 9 00:12:58.799908 systemd-logind[1495]: Removed session 2.
May 9 00:12:58.967774 systemd[1]: Started sshd@2-157.180.45.97:22-139.178.68.195:47816.service - OpenSSH per-connection server daemon (139.178.68.195:47816).
May 9 00:12:59.945533 sshd[1836]: Accepted publickey for core from 139.178.68.195 port 47816 ssh2: RSA SHA256:kWmuzyOdL82NqCTDeKfCPqtPYuFTqtQu4IYGGTbCa4E
May 9 00:12:59.947426 sshd-session[1836]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 00:12:59.955188 systemd-logind[1495]: New session 3 of user core.
May 9 00:12:59.964540 systemd[1]: Started session-3.scope - Session 3 of User core.
May 9 00:13:00.613186 sshd[1838]: Connection closed by 139.178.68.195 port 47816
May 9 00:13:00.614100 sshd-session[1836]: pam_unix(sshd:session): session closed for user core
May 9 00:13:00.617664 systemd[1]: sshd@2-157.180.45.97:22-139.178.68.195:47816.service: Deactivated successfully.
May 9 00:13:00.620066 systemd[1]: session-3.scope: Deactivated successfully.
May 9 00:13:00.621643 systemd-logind[1495]: Session 3 logged out. Waiting for processes to exit.
May 9 00:13:00.623478 systemd-logind[1495]: Removed session 3.
May 9 00:13:00.778524 systemd[1]: Started sshd@3-157.180.45.97:22-139.178.68.195:47822.service - OpenSSH per-connection server daemon (139.178.68.195:47822).
May 9 00:13:01.743703 sshd[1843]: Accepted publickey for core from 139.178.68.195 port 47822 ssh2: RSA SHA256:kWmuzyOdL82NqCTDeKfCPqtPYuFTqtQu4IYGGTbCa4E
May 9 00:13:01.745023 sshd-session[1843]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 00:13:01.749122 systemd-logind[1495]: New session 4 of user core.
May 9 00:13:01.756459 systemd[1]: Started session-4.scope - Session 4 of User core.
May 9 00:13:02.413206 sshd[1845]: Connection closed by 139.178.68.195 port 47822
May 9 00:13:02.414091 sshd-session[1843]: pam_unix(sshd:session): session closed for user core
May 9 00:13:02.416977 systemd[1]: sshd@3-157.180.45.97:22-139.178.68.195:47822.service: Deactivated successfully.
May 9 00:13:02.418242 systemd[1]: session-4.scope: Deactivated successfully.
May 9 00:13:02.420137 systemd-logind[1495]: Session 4 logged out. Waiting for processes to exit.
May 9 00:13:02.422019 systemd-logind[1495]: Removed session 4.
May 9 00:13:02.587752 systemd[1]: Started sshd@4-157.180.45.97:22-139.178.68.195:47836.service - OpenSSH per-connection server daemon (139.178.68.195:47836).
May 9 00:13:03.137294 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 11.
May 9 00:13:03.143563 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 9 00:13:03.215428 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 9 00:13:03.227621 (kubelet)[1860]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 9 00:13:03.265917 kubelet[1860]: E0509 00:13:03.265871 1860 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 9 00:13:03.268240 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 9 00:13:03.268395 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 9 00:13:03.559377 sshd[1850]: Accepted publickey for core from 139.178.68.195 port 47836 ssh2: RSA SHA256:kWmuzyOdL82NqCTDeKfCPqtPYuFTqtQu4IYGGTbCa4E
May 9 00:13:03.560858 sshd-session[1850]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 00:13:03.565413 systemd-logind[1495]: New session 5 of user core.
May 9 00:13:03.572481 systemd[1]: Started session-5.scope - Session 5 of User core.
May 9 00:13:04.082764 sudo[1870]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
May 9 00:13:04.083019 sudo[1870]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 9 00:13:04.097855 sudo[1870]: pam_unix(sudo:session): session closed for user root
May 9 00:13:04.254941 sshd[1869]: Connection closed by 139.178.68.195 port 47836
May 9 00:13:04.255667 sshd-session[1850]: pam_unix(sshd:session): session closed for user core
May 9 00:13:04.258383 systemd[1]: sshd@4-157.180.45.97:22-139.178.68.195:47836.service: Deactivated successfully.
May 9 00:13:04.259890 systemd[1]: session-5.scope: Deactivated successfully.
May 9 00:13:04.260799 systemd-logind[1495]: Session 5 logged out. Waiting for processes to exit.
May 9 00:13:04.261754 systemd-logind[1495]: Removed session 5.
May 9 00:13:04.426819 systemd[1]: Started sshd@5-157.180.45.97:22-139.178.68.195:47848.service - OpenSSH per-connection server daemon (139.178.68.195:47848).
May 9 00:13:05.418843 sshd[1875]: Accepted publickey for core from 139.178.68.195 port 47848 ssh2: RSA SHA256:kWmuzyOdL82NqCTDeKfCPqtPYuFTqtQu4IYGGTbCa4E
May 9 00:13:05.420125 sshd-session[1875]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 00:13:05.424168 systemd-logind[1495]: New session 6 of user core.
May 9 00:13:05.433434 systemd[1]: Started session-6.scope - Session 6 of User core.
May 9 00:13:05.939417 sudo[1879]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
May 9 00:13:05.940076 sudo[1879]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 9 00:13:05.945800 sudo[1879]: pam_unix(sudo:session): session closed for user root
May 9 00:13:05.953738 sudo[1878]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
May 9 00:13:05.954445 sudo[1878]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 9 00:13:05.976996 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 9 00:13:06.018036 augenrules[1901]: No rules
May 9 00:13:06.019086 systemd[1]: audit-rules.service: Deactivated successfully.
May 9 00:13:06.019638 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 9 00:13:06.022431 sudo[1878]: pam_unix(sudo:session): session closed for user root
May 9 00:13:06.180147 sshd[1877]: Connection closed by 139.178.68.195 port 47848
May 9 00:13:06.180660 sshd-session[1875]: pam_unix(sshd:session): session closed for user core
May 9 00:13:06.182958 systemd[1]: sshd@5-157.180.45.97:22-139.178.68.195:47848.service: Deactivated successfully.
May 9 00:13:06.184234 systemd[1]: session-6.scope: Deactivated successfully.
May 9 00:13:06.185242 systemd-logind[1495]: Session 6 logged out. Waiting for processes to exit.
May 9 00:13:06.186253 systemd-logind[1495]: Removed session 6.
May 9 00:13:06.346620 systemd[1]: Started sshd@6-157.180.45.97:22-139.178.68.195:58028.service - OpenSSH per-connection server daemon (139.178.68.195:58028).
May 9 00:13:07.325561 sshd[1909]: Accepted publickey for core from 139.178.68.195 port 58028 ssh2: RSA SHA256:kWmuzyOdL82NqCTDeKfCPqtPYuFTqtQu4IYGGTbCa4E
May 9 00:13:07.327605 sshd-session[1909]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 00:13:07.334989 systemd-logind[1495]: New session 7 of user core.
May 9 00:13:07.341558 systemd[1]: Started session-7.scope - Session 7 of User core.
May 9 00:13:07.846422 sudo[1912]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
May 9 00:13:07.846863 sudo[1912]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 9 00:13:08.158506 systemd[1]: Starting docker.service - Docker Application Container Engine...
May 9 00:13:08.158584 (dockerd)[1931]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
May 9 00:13:08.368276 dockerd[1931]: time="2025-05-09T00:13:08.368068254Z" level=info msg="Starting up"
May 9 00:13:08.448054 dockerd[1931]: time="2025-05-09T00:13:08.447630519Z" level=info msg="Loading containers: start."
May 9 00:13:08.576333 kernel: Initializing XFRM netlink socket
May 9 00:13:08.661517 systemd-networkd[1430]: docker0: Link UP
May 9 00:13:08.693383 dockerd[1931]: time="2025-05-09T00:13:08.693328413Z" level=info msg="Loading containers: done."
May 9 00:13:08.705571 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck284480608-merged.mount: Deactivated successfully.
May 9 00:13:08.708089 dockerd[1931]: time="2025-05-09T00:13:08.708049237Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
May 9 00:13:08.708151 dockerd[1931]: time="2025-05-09T00:13:08.708135499Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1
May 9 00:13:08.708249 dockerd[1931]: time="2025-05-09T00:13:08.708224037Z" level=info msg="Daemon has completed initialization"
May 9 00:13:08.740092 dockerd[1931]: time="2025-05-09T00:13:08.740013606Z" level=info msg="API listen on /run/docker.sock"
May 9 00:13:08.740542 systemd[1]: Started docker.service - Docker Application Container Engine.
May 9 00:13:11.002183 containerd[1517]: time="2025-05-09T00:13:11.002050476Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\""
May 9 00:13:11.568343 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1929455184.mount: Deactivated successfully.
May 9 00:13:12.738995 containerd[1517]: time="2025-05-09T00:13:12.738948922Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 9 00:13:12.739837 containerd[1517]: time="2025-05-09T00:13:12.739805269Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.12: active requests=0, bytes read=32674967"
May 9 00:13:12.741027 containerd[1517]: time="2025-05-09T00:13:12.740993581Z" level=info msg="ImageCreate event name:\"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 9 00:13:12.743351 containerd[1517]: time="2025-05-09T00:13:12.743315214Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 9 00:13:12.744266 containerd[1517]: time="2025-05-09T00:13:12.744144398Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.12\" with image id \"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\", size \"32671673\" in 1.742060619s"
May 9 00:13:12.744266 containerd[1517]: time="2025-05-09T00:13:12.744169646Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\" returns image reference \"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\""
May 9 00:13:12.761067 containerd[1517]: time="2025-05-09T00:13:12.761037433Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\""
May 9 00:13:13.387293 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 12.
May 9 00:13:13.393402 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 9 00:13:13.464796 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 9 00:13:13.468250 (kubelet)[2188]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 9 00:13:13.502564 kubelet[2188]: E0509 00:13:13.502508 2188 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 9 00:13:13.504190 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 9 00:13:13.504343 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 9 00:13:13.987403 containerd[1517]: time="2025-05-09T00:13:13.987339266Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 9 00:13:13.988282 containerd[1517]: time="2025-05-09T00:13:13.988239574Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.12: active requests=0, bytes read=29617556"
May 9 00:13:13.989331 containerd[1517]: time="2025-05-09T00:13:13.989280790Z" level=info msg="ImageCreate event name:\"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 9 00:13:13.991818 containerd[1517]: time="2025-05-09T00:13:13.991768826Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 9 00:13:13.992604 containerd[1517]: time="2025-05-09T00:13:13.992514923Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.12\" with image id \"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\", size \"31105907\" in 1.23145057s"
May 9 00:13:13.992604 containerd[1517]: time="2025-05-09T00:13:13.992538309Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\" returns image reference \"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\""
May 9 00:13:14.011620 containerd[1517]: time="2025-05-09T00:13:14.011574788Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\""
May 9 00:13:14.997593 containerd[1517]: time="2025-05-09T00:13:14.997545204Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 9 00:13:14.998489 containerd[1517]: time="2025-05-09T00:13:14.998456144Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.12: active requests=0, bytes read=17903704"
May 9 00:13:14.999447 containerd[1517]: time="2025-05-09T00:13:14.999414302Z" level=info msg="ImageCreate event name:\"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 9 00:13:15.001650 containerd[1517]: time="2025-05-09T00:13:15.001612761Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 9 00:13:15.002424 containerd[1517]: time="2025-05-09T00:13:15.002406079Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.12\" with image id \"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\", size \"19392073\" in 990.796186ms"
May 9 00:13:15.002549 containerd[1517]: time="2025-05-09T00:13:15.002478897Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\" returns image reference \"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\""
May 9 00:13:15.020624 containerd[1517]: time="2025-05-09T00:13:15.020597164Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\""
May 9 00:13:15.961418 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3464173226.mount: Deactivated successfully.
May 9 00:13:16.217893 containerd[1517]: time="2025-05-09T00:13:16.217794716Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 9 00:13:16.218584 containerd[1517]: time="2025-05-09T00:13:16.218450955Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.12: active requests=0, bytes read=29185845"
May 9 00:13:16.219024 containerd[1517]: time="2025-05-09T00:13:16.219005161Z" level=info msg="ImageCreate event name:\"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 9 00:13:16.220661 containerd[1517]: time="2025-05-09T00:13:16.220618716Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 9 00:13:16.221157 containerd[1517]: time="2025-05-09T00:13:16.221118359Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.12\" with image id \"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\", repo tag \"registry.k8s.io/kube-proxy:v1.30.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\", size \"29184836\" in 1.200366484s"
May 9 00:13:16.221157 containerd[1517]: time="2025-05-09T00:13:16.221142143Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\" returns image reference \"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\""
May 9 00:13:16.236790 containerd[1517]: time="2025-05-09T00:13:16.236753389Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
May 9 00:13:16.729389 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount157003640.mount: Deactivated successfully.
May 9 00:13:17.446924 containerd[1517]: time="2025-05-09T00:13:17.446876427Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 9 00:13:17.447817 containerd[1517]: time="2025-05-09T00:13:17.447779692Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185843"
May 9 00:13:17.448610 containerd[1517]: time="2025-05-09T00:13:17.448554444Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 9 00:13:17.450994 containerd[1517]: time="2025-05-09T00:13:17.450953032Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 9 00:13:17.452525 containerd[1517]: time="2025-05-09T00:13:17.452080010Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.215297103s"
May 9 00:13:17.452525 containerd[1517]: time="2025-05-09T00:13:17.452114454Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
May 9 00:13:17.467765 containerd[1517]: time="2025-05-09T00:13:17.467707255Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
May 9 00:13:17.922595 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3258217936.mount: Deactivated successfully.
May 9 00:13:17.929825 containerd[1517]: time="2025-05-09T00:13:17.929719548Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 9 00:13:17.931416 containerd[1517]: time="2025-05-09T00:13:17.931230631Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322312"
May 9 00:13:17.932347 containerd[1517]: time="2025-05-09T00:13:17.932250836Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 9 00:13:17.936768 containerd[1517]: time="2025-05-09T00:13:17.936672562Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 9 00:13:17.938226 containerd[1517]: time="2025-05-09T00:13:17.938047427Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 470.284867ms"
May 9 00:13:17.938226 containerd[1517]: time="2025-05-09T00:13:17.938092402Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\""
May 9 00:13:17.973485 containerd[1517]: time="2025-05-09T00:13:17.973440379Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\""
May 9 00:13:18.503261 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3059904013.mount: Deactivated successfully.
May 9 00:13:20.490994 containerd[1517]: time="2025-05-09T00:13:20.490944595Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 9 00:13:20.491935 containerd[1517]: time="2025-05-09T00:13:20.491890551Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238653"
May 9 00:13:20.492833 containerd[1517]: time="2025-05-09T00:13:20.492795429Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 9 00:13:20.495138 containerd[1517]: time="2025-05-09T00:13:20.495105037Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 9 00:13:20.496249 containerd[1517]: time="2025-05-09T00:13:20.496146143Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 2.522659367s"
May 9 00:13:20.496249 containerd[1517]: time="2025-05-09T00:13:20.496171881Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\""
May 9 00:13:23.098506 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 9 00:13:23.116568 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 9 00:13:23.133607 systemd[1]: Reloading requested from client PID 2397 ('systemctl') (unit session-7.scope)...
May 9 00:13:23.133624 systemd[1]: Reloading...
May 9 00:13:23.210329 zram_generator::config[2437]: No configuration found.
May 9 00:13:23.293631 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 9 00:13:23.365886 systemd[1]: Reloading finished in 232 ms.
May 9 00:13:23.420051 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
May 9 00:13:23.420152 systemd[1]: kubelet.service: Failed with result 'signal'.
May 9 00:13:23.420401 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 9 00:13:23.422598 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 9 00:13:23.530886 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 9 00:13:23.533964 (kubelet)[2492]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
May 9 00:13:23.564592 kubelet[2492]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 9 00:13:23.564592 kubelet[2492]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
May 9 00:13:23.564592 kubelet[2492]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 9 00:13:23.566034 kubelet[2492]: I0509 00:13:23.565995 2492 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
May 9 00:13:23.745530 kubelet[2492]: I0509 00:13:23.745493 2492 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
May 9 00:13:23.745530 kubelet[2492]: I0509 00:13:23.745519 2492 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
May 9 00:13:23.746603 kubelet[2492]: I0509 00:13:23.746582 2492 server.go:927] "Client rotation is on, will bootstrap in background"
May 9 00:13:23.764126 kubelet[2492]: I0509 00:13:23.763917 2492 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 9 00:13:23.767584 kubelet[2492]: E0509 00:13:23.767556 2492 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://157.180.45.97:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 157.180.45.97:6443: connect: connection refused
May 9 00:13:23.779080 kubelet[2492]: I0509 00:13:23.779060 2492 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
May 9 00:13:23.782620 kubelet[2492]: I0509 00:13:23.782576 2492 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
May 9 00:13:23.782742 kubelet[2492]: I0509 00:13:23.782605 2492 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4152-2-3-n-8b48d2c086","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
May 9 00:13:23.782742 kubelet[2492]: I0509 00:13:23.782740 2492 topology_manager.go:138] "Creating topology manager with none policy"
May 9
00:13:23.782855 kubelet[2492]: I0509 00:13:23.782749 2492 container_manager_linux.go:301] "Creating device plugin manager" May 9 00:13:23.782855 kubelet[2492]: I0509 00:13:23.782852 2492 state_mem.go:36] "Initialized new in-memory state store" May 9 00:13:23.783666 kubelet[2492]: I0509 00:13:23.783647 2492 kubelet.go:400] "Attempting to sync node with API server" May 9 00:13:23.783666 kubelet[2492]: I0509 00:13:23.783662 2492 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" May 9 00:13:23.784364 kubelet[2492]: I0509 00:13:23.783678 2492 kubelet.go:312] "Adding apiserver pod source" May 9 00:13:23.784364 kubelet[2492]: I0509 00:13:23.783687 2492 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 9 00:13:23.789337 kubelet[2492]: W0509 00:13:23.788843 2492 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://157.180.45.97:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 157.180.45.97:6443: connect: connection refused May 9 00:13:23.789337 kubelet[2492]: E0509 00:13:23.788887 2492 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://157.180.45.97:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 157.180.45.97:6443: connect: connection refused May 9 00:13:23.789337 kubelet[2492]: W0509 00:13:23.788934 2492 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://157.180.45.97:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152-2-3-n-8b48d2c086&limit=500&resourceVersion=0": dial tcp 157.180.45.97:6443: connect: connection refused May 9 00:13:23.789337 kubelet[2492]: E0509 00:13:23.788959 2492 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://157.180.45.97:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152-2-3-n-8b48d2c086&limit=500&resourceVersion=0": 
dial tcp 157.180.45.97:6443: connect: connection refused May 9 00:13:23.790033 kubelet[2492]: I0509 00:13:23.790020 2492 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" May 9 00:13:23.791496 kubelet[2492]: I0509 00:13:23.791470 2492 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 9 00:13:23.791547 kubelet[2492]: W0509 00:13:23.791514 2492 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 9 00:13:23.792959 kubelet[2492]: I0509 00:13:23.792705 2492 server.go:1264] "Started kubelet" May 9 00:13:23.798483 kubelet[2492]: I0509 00:13:23.798389 2492 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 9 00:13:23.799505 kubelet[2492]: I0509 00:13:23.799141 2492 server.go:455] "Adding debug handlers to kubelet server" May 9 00:13:23.800010 kubelet[2492]: I0509 00:13:23.799952 2492 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 9 00:13:23.800312 kubelet[2492]: I0509 00:13:23.800171 2492 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 9 00:13:23.801326 kubelet[2492]: I0509 00:13:23.801093 2492 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 9 00:13:23.801461 kubelet[2492]: E0509 00:13:23.801242 2492 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://157.180.45.97:6443/api/v1/namespaces/default/events\": dial tcp 157.180.45.97:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4152-2-3-n-8b48d2c086.183db3828d9e1e86 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4152-2-3-n-8b48d2c086,UID:ci-4152-2-3-n-8b48d2c086,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting 
kubelet.,Source:EventSource{Component:kubelet,Host:ci-4152-2-3-n-8b48d2c086,},FirstTimestamp:2025-05-09 00:13:23.792690822 +0000 UTC m=+0.256005147,LastTimestamp:2025-05-09 00:13:23.792690822 +0000 UTC m=+0.256005147,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4152-2-3-n-8b48d2c086,}" May 9 00:13:23.804007 kubelet[2492]: I0509 00:13:23.803996 2492 volume_manager.go:291] "Starting Kubelet Volume Manager" May 9 00:13:23.804263 kubelet[2492]: I0509 00:13:23.804251 2492 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 9 00:13:23.805400 kubelet[2492]: I0509 00:13:23.805006 2492 reconciler.go:26] "Reconciler: start to sync state" May 9 00:13:23.805400 kubelet[2492]: W0509 00:13:23.805218 2492 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://157.180.45.97:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 157.180.45.97:6443: connect: connection refused May 9 00:13:23.805400 kubelet[2492]: E0509 00:13:23.805245 2492 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://157.180.45.97:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 157.180.45.97:6443: connect: connection refused May 9 00:13:23.806683 kubelet[2492]: E0509 00:13:23.806663 2492 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://157.180.45.97:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152-2-3-n-8b48d2c086?timeout=10s\": dial tcp 157.180.45.97:6443: connect: connection refused" interval="200ms" May 9 00:13:23.807490 kubelet[2492]: E0509 00:13:23.807462 2492 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 9 00:13:23.807915 kubelet[2492]: I0509 00:13:23.807893 2492 factory.go:221] Registration of the systemd container factory successfully May 9 00:13:23.807975 kubelet[2492]: I0509 00:13:23.807957 2492 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 9 00:13:23.808838 kubelet[2492]: I0509 00:13:23.808825 2492 factory.go:221] Registration of the containerd container factory successfully May 9 00:13:23.815655 kubelet[2492]: I0509 00:13:23.815612 2492 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 9 00:13:23.816644 kubelet[2492]: I0509 00:13:23.816627 2492 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 9 00:13:23.816986 kubelet[2492]: I0509 00:13:23.816718 2492 status_manager.go:217] "Starting to sync pod status with apiserver" May 9 00:13:23.816986 kubelet[2492]: I0509 00:13:23.816740 2492 kubelet.go:2337] "Starting kubelet main sync loop" May 9 00:13:23.816986 kubelet[2492]: E0509 00:13:23.816789 2492 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 9 00:13:23.821981 kubelet[2492]: W0509 00:13:23.821948 2492 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://157.180.45.97:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 157.180.45.97:6443: connect: connection refused May 9 00:13:23.822024 kubelet[2492]: E0509 00:13:23.821983 2492 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://157.180.45.97:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 
157.180.45.97:6443: connect: connection refused May 9 00:13:23.829247 kubelet[2492]: I0509 00:13:23.829236 2492 cpu_manager.go:214] "Starting CPU manager" policy="none" May 9 00:13:23.829373 kubelet[2492]: I0509 00:13:23.829366 2492 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 9 00:13:23.829591 kubelet[2492]: I0509 00:13:23.829449 2492 state_mem.go:36] "Initialized new in-memory state store" May 9 00:13:23.831211 kubelet[2492]: I0509 00:13:23.831170 2492 policy_none.go:49] "None policy: Start" May 9 00:13:23.831847 kubelet[2492]: I0509 00:13:23.831798 2492 memory_manager.go:170] "Starting memorymanager" policy="None" May 9 00:13:23.832241 kubelet[2492]: I0509 00:13:23.831973 2492 state_mem.go:35] "Initializing new in-memory state store" May 9 00:13:23.838672 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 9 00:13:23.848137 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 9 00:13:23.850627 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
May 9 00:13:23.854887 kubelet[2492]: I0509 00:13:23.854873 2492 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 9 00:13:23.855360 kubelet[2492]: I0509 00:13:23.855100 2492 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 9 00:13:23.855360 kubelet[2492]: I0509 00:13:23.855178 2492 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 9 00:13:23.856713 kubelet[2492]: E0509 00:13:23.856702 2492 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4152-2-3-n-8b48d2c086\" not found" May 9 00:13:23.907617 kubelet[2492]: I0509 00:13:23.907564 2492 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152-2-3-n-8b48d2c086" May 9 00:13:23.908085 kubelet[2492]: E0509 00:13:23.908030 2492 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://157.180.45.97:6443/api/v1/nodes\": dial tcp 157.180.45.97:6443: connect: connection refused" node="ci-4152-2-3-n-8b48d2c086" May 9 00:13:23.917513 kubelet[2492]: I0509 00:13:23.917443 2492 topology_manager.go:215] "Topology Admit Handler" podUID="c0b8eec11126708f73a20eabb114ce30" podNamespace="kube-system" podName="kube-apiserver-ci-4152-2-3-n-8b48d2c086" May 9 00:13:23.919400 kubelet[2492]: I0509 00:13:23.918896 2492 topology_manager.go:215] "Topology Admit Handler" podUID="5652dc5d73f0d68a1640acaddd6a7e06" podNamespace="kube-system" podName="kube-controller-manager-ci-4152-2-3-n-8b48d2c086" May 9 00:13:23.921334 kubelet[2492]: I0509 00:13:23.921100 2492 topology_manager.go:215] "Topology Admit Handler" podUID="e2d4b6d331d81deb6ec6be893d40f281" podNamespace="kube-system" podName="kube-scheduler-ci-4152-2-3-n-8b48d2c086" May 9 00:13:23.928971 systemd[1]: Created slice kubepods-burstable-podc0b8eec11126708f73a20eabb114ce30.slice - libcontainer container 
kubepods-burstable-podc0b8eec11126708f73a20eabb114ce30.slice. May 9 00:13:23.948466 systemd[1]: Created slice kubepods-burstable-pod5652dc5d73f0d68a1640acaddd6a7e06.slice - libcontainer container kubepods-burstable-pod5652dc5d73f0d68a1640acaddd6a7e06.slice. May 9 00:13:23.963658 systemd[1]: Created slice kubepods-burstable-pode2d4b6d331d81deb6ec6be893d40f281.slice - libcontainer container kubepods-burstable-pode2d4b6d331d81deb6ec6be893d40f281.slice. May 9 00:13:24.007505 kubelet[2492]: I0509 00:13:24.006720 2492 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e2d4b6d331d81deb6ec6be893d40f281-kubeconfig\") pod \"kube-scheduler-ci-4152-2-3-n-8b48d2c086\" (UID: \"e2d4b6d331d81deb6ec6be893d40f281\") " pod="kube-system/kube-scheduler-ci-4152-2-3-n-8b48d2c086" May 9 00:13:24.007505 kubelet[2492]: I0509 00:13:24.006801 2492 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c0b8eec11126708f73a20eabb114ce30-ca-certs\") pod \"kube-apiserver-ci-4152-2-3-n-8b48d2c086\" (UID: \"c0b8eec11126708f73a20eabb114ce30\") " pod="kube-system/kube-apiserver-ci-4152-2-3-n-8b48d2c086" May 9 00:13:24.007505 kubelet[2492]: I0509 00:13:24.006855 2492 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5652dc5d73f0d68a1640acaddd6a7e06-ca-certs\") pod \"kube-controller-manager-ci-4152-2-3-n-8b48d2c086\" (UID: \"5652dc5d73f0d68a1640acaddd6a7e06\") " pod="kube-system/kube-controller-manager-ci-4152-2-3-n-8b48d2c086" May 9 00:13:24.007505 kubelet[2492]: I0509 00:13:24.006885 2492 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5652dc5d73f0d68a1640acaddd6a7e06-flexvolume-dir\") pod 
\"kube-controller-manager-ci-4152-2-3-n-8b48d2c086\" (UID: \"5652dc5d73f0d68a1640acaddd6a7e06\") " pod="kube-system/kube-controller-manager-ci-4152-2-3-n-8b48d2c086" May 9 00:13:24.007505 kubelet[2492]: I0509 00:13:24.006909 2492 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5652dc5d73f0d68a1640acaddd6a7e06-kubeconfig\") pod \"kube-controller-manager-ci-4152-2-3-n-8b48d2c086\" (UID: \"5652dc5d73f0d68a1640acaddd6a7e06\") " pod="kube-system/kube-controller-manager-ci-4152-2-3-n-8b48d2c086" May 9 00:13:24.007826 kubelet[2492]: I0509 00:13:24.006931 2492 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5652dc5d73f0d68a1640acaddd6a7e06-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4152-2-3-n-8b48d2c086\" (UID: \"5652dc5d73f0d68a1640acaddd6a7e06\") " pod="kube-system/kube-controller-manager-ci-4152-2-3-n-8b48d2c086" May 9 00:13:24.007826 kubelet[2492]: I0509 00:13:24.006956 2492 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c0b8eec11126708f73a20eabb114ce30-k8s-certs\") pod \"kube-apiserver-ci-4152-2-3-n-8b48d2c086\" (UID: \"c0b8eec11126708f73a20eabb114ce30\") " pod="kube-system/kube-apiserver-ci-4152-2-3-n-8b48d2c086" May 9 00:13:24.007826 kubelet[2492]: I0509 00:13:24.006980 2492 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c0b8eec11126708f73a20eabb114ce30-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4152-2-3-n-8b48d2c086\" (UID: \"c0b8eec11126708f73a20eabb114ce30\") " pod="kube-system/kube-apiserver-ci-4152-2-3-n-8b48d2c086" May 9 00:13:24.007826 kubelet[2492]: I0509 00:13:24.007002 2492 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5652dc5d73f0d68a1640acaddd6a7e06-k8s-certs\") pod \"kube-controller-manager-ci-4152-2-3-n-8b48d2c086\" (UID: \"5652dc5d73f0d68a1640acaddd6a7e06\") " pod="kube-system/kube-controller-manager-ci-4152-2-3-n-8b48d2c086" May 9 00:13:24.008831 kubelet[2492]: E0509 00:13:24.008657 2492 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://157.180.45.97:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152-2-3-n-8b48d2c086?timeout=10s\": dial tcp 157.180.45.97:6443: connect: connection refused" interval="400ms" May 9 00:13:24.110946 kubelet[2492]: I0509 00:13:24.110808 2492 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152-2-3-n-8b48d2c086" May 9 00:13:24.111417 kubelet[2492]: E0509 00:13:24.111341 2492 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://157.180.45.97:6443/api/v1/nodes\": dial tcp 157.180.45.97:6443: connect: connection refused" node="ci-4152-2-3-n-8b48d2c086" May 9 00:13:24.247062 containerd[1517]: time="2025-05-09T00:13:24.246974206Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4152-2-3-n-8b48d2c086,Uid:c0b8eec11126708f73a20eabb114ce30,Namespace:kube-system,Attempt:0,}" May 9 00:13:24.261775 containerd[1517]: time="2025-05-09T00:13:24.261245112Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4152-2-3-n-8b48d2c086,Uid:5652dc5d73f0d68a1640acaddd6a7e06,Namespace:kube-system,Attempt:0,}" May 9 00:13:24.267138 containerd[1517]: time="2025-05-09T00:13:24.267093110Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4152-2-3-n-8b48d2c086,Uid:e2d4b6d331d81deb6ec6be893d40f281,Namespace:kube-system,Attempt:0,}" May 9 00:13:24.409656 kubelet[2492]: E0509 00:13:24.409605 2492 controller.go:145] "Failed to ensure lease exists, will retry" 
err="Get \"https://157.180.45.97:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152-2-3-n-8b48d2c086?timeout=10s\": dial tcp 157.180.45.97:6443: connect: connection refused" interval="800ms" May 9 00:13:24.514740 kubelet[2492]: I0509 00:13:24.514520 2492 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152-2-3-n-8b48d2c086" May 9 00:13:24.515119 kubelet[2492]: E0509 00:13:24.515047 2492 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://157.180.45.97:6443/api/v1/nodes\": dial tcp 157.180.45.97:6443: connect: connection refused" node="ci-4152-2-3-n-8b48d2c086" May 9 00:13:24.770930 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4196185014.mount: Deactivated successfully. May 9 00:13:24.781529 containerd[1517]: time="2025-05-09T00:13:24.781451196Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 9 00:13:24.784546 containerd[1517]: time="2025-05-09T00:13:24.784487247Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312078" May 9 00:13:24.785642 containerd[1517]: time="2025-05-09T00:13:24.785578166Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 9 00:13:24.787650 containerd[1517]: time="2025-05-09T00:13:24.787611012Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 9 00:13:24.788576 containerd[1517]: time="2025-05-09T00:13:24.788462851Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 9 00:13:24.789919 
containerd[1517]: time="2025-05-09T00:13:24.789862031Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 9 00:13:24.790630 containerd[1517]: time="2025-05-09T00:13:24.790516667Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 9 00:13:24.792965 containerd[1517]: time="2025-05-09T00:13:24.792917919Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 9 00:13:24.795107 containerd[1517]: time="2025-05-09T00:13:24.794777449Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 527.584039ms" May 9 00:13:24.796178 containerd[1517]: time="2025-05-09T00:13:24.796111106Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 534.733524ms" May 9 00:13:24.799471 containerd[1517]: time="2025-05-09T00:13:24.799431814Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 549.984851ms" May 9 
00:13:24.825354 kubelet[2492]: W0509 00:13:24.824589 2492 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://157.180.45.97:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152-2-3-n-8b48d2c086&limit=500&resourceVersion=0": dial tcp 157.180.45.97:6443: connect: connection refused May 9 00:13:24.825354 kubelet[2492]: E0509 00:13:24.824638 2492 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://157.180.45.97:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152-2-3-n-8b48d2c086&limit=500&resourceVersion=0": dial tcp 157.180.45.97:6443: connect: connection refused May 9 00:13:24.910814 containerd[1517]: time="2025-05-09T00:13:24.910083538Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 9 00:13:24.910814 containerd[1517]: time="2025-05-09T00:13:24.910135908Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 9 00:13:24.910814 containerd[1517]: time="2025-05-09T00:13:24.910149553Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:13:24.910814 containerd[1517]: time="2025-05-09T00:13:24.910218523Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:13:24.916347 containerd[1517]: time="2025-05-09T00:13:24.915538314Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 9 00:13:24.917546 containerd[1517]: time="2025-05-09T00:13:24.916278481Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 9 00:13:24.917546 containerd[1517]: time="2025-05-09T00:13:24.916358583Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:13:24.917722 containerd[1517]: time="2025-05-09T00:13:24.917658987Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:13:24.919925 containerd[1517]: time="2025-05-09T00:13:24.919868938Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 9 00:13:24.919987 containerd[1517]: time="2025-05-09T00:13:24.919906248Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 9 00:13:24.919987 containerd[1517]: time="2025-05-09T00:13:24.919919063Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:13:24.920070 containerd[1517]: time="2025-05-09T00:13:24.919982743Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:13:24.930466 systemd[1]: Started cri-containerd-f28e6fd88470139a5b3aebb758a9504b7464d62d52095dd6cae5bcf2252ffc78.scope - libcontainer container f28e6fd88470139a5b3aebb758a9504b7464d62d52095dd6cae5bcf2252ffc78. May 9 00:13:24.952549 systemd[1]: Started cri-containerd-a276fbaa00ee70592d622a5a8898d407a279fdb59676e262d6703bcba9002cdc.scope - libcontainer container a276fbaa00ee70592d622a5a8898d407a279fdb59676e262d6703bcba9002cdc. May 9 00:13:24.955736 systemd[1]: Started cri-containerd-2e5cf8450fc64e43c532772d892a13af203af10cee9ac8154b1662e6fb6d5f42.scope - libcontainer container 2e5cf8450fc64e43c532772d892a13af203af10cee9ac8154b1662e6fb6d5f42. 
May 9 00:13:24.986884 containerd[1517]: time="2025-05-09T00:13:24.986853742Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4152-2-3-n-8b48d2c086,Uid:c0b8eec11126708f73a20eabb114ce30,Namespace:kube-system,Attempt:0,} returns sandbox id \"f28e6fd88470139a5b3aebb758a9504b7464d62d52095dd6cae5bcf2252ffc78\"" May 9 00:13:24.993539 containerd[1517]: time="2025-05-09T00:13:24.993501829Z" level=info msg="CreateContainer within sandbox \"f28e6fd88470139a5b3aebb758a9504b7464d62d52095dd6cae5bcf2252ffc78\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 9 00:13:25.008335 containerd[1517]: time="2025-05-09T00:13:25.008285263Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4152-2-3-n-8b48d2c086,Uid:e2d4b6d331d81deb6ec6be893d40f281,Namespace:kube-system,Attempt:0,} returns sandbox id \"2e5cf8450fc64e43c532772d892a13af203af10cee9ac8154b1662e6fb6d5f42\"" May 9 00:13:25.010504 containerd[1517]: time="2025-05-09T00:13:25.010413399Z" level=info msg="CreateContainer within sandbox \"2e5cf8450fc64e43c532772d892a13af203af10cee9ac8154b1662e6fb6d5f42\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 9 00:13:25.015109 kubelet[2492]: W0509 00:13:25.015014 2492 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://157.180.45.97:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 157.180.45.97:6443: connect: connection refused May 9 00:13:25.015109 kubelet[2492]: E0509 00:13:25.015090 2492 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://157.180.45.97:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 157.180.45.97:6443: connect: connection refused May 9 00:13:25.015452 containerd[1517]: time="2025-05-09T00:13:25.015434538Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-ci-4152-2-3-n-8b48d2c086,Uid:5652dc5d73f0d68a1640acaddd6a7e06,Namespace:kube-system,Attempt:0,} returns sandbox id \"a276fbaa00ee70592d622a5a8898d407a279fdb59676e262d6703bcba9002cdc\"" May 9 00:13:25.017156 containerd[1517]: time="2025-05-09T00:13:25.017079973Z" level=info msg="CreateContainer within sandbox \"f28e6fd88470139a5b3aebb758a9504b7464d62d52095dd6cae5bcf2252ffc78\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"e95c20c049504276d6c19ef6ef93c115955e433b33e24d0baa0364bcfaf92865\"" May 9 00:13:25.017470 containerd[1517]: time="2025-05-09T00:13:25.017294949Z" level=info msg="CreateContainer within sandbox \"a276fbaa00ee70592d622a5a8898d407a279fdb59676e262d6703bcba9002cdc\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 9 00:13:25.018957 containerd[1517]: time="2025-05-09T00:13:25.018897433Z" level=info msg="StartContainer for \"e95c20c049504276d6c19ef6ef93c115955e433b33e24d0baa0364bcfaf92865\"" May 9 00:13:25.023987 containerd[1517]: time="2025-05-09T00:13:25.023845673Z" level=info msg="CreateContainer within sandbox \"2e5cf8450fc64e43c532772d892a13af203af10cee9ac8154b1662e6fb6d5f42\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"c4dfcd62c52f6a93facc1d56daa069aec46f24ea66b921de0e53ed91056e978a\"" May 9 00:13:25.025241 containerd[1517]: time="2025-05-09T00:13:25.024416260Z" level=info msg="StartContainer for \"c4dfcd62c52f6a93facc1d56daa069aec46f24ea66b921de0e53ed91056e978a\"" May 9 00:13:25.031499 containerd[1517]: time="2025-05-09T00:13:25.031456058Z" level=info msg="CreateContainer within sandbox \"a276fbaa00ee70592d622a5a8898d407a279fdb59676e262d6703bcba9002cdc\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"ec79b46b09ed86a8d2a4f2a7345a08bffa4df7132b729e4debee05758b0e7f04\"" May 9 00:13:25.031980 containerd[1517]: time="2025-05-09T00:13:25.031959127Z" level=info 
msg="StartContainer for \"ec79b46b09ed86a8d2a4f2a7345a08bffa4df7132b729e4debee05758b0e7f04\"" May 9 00:13:25.043628 systemd[1]: Started cri-containerd-e95c20c049504276d6c19ef6ef93c115955e433b33e24d0baa0364bcfaf92865.scope - libcontainer container e95c20c049504276d6c19ef6ef93c115955e433b33e24d0baa0364bcfaf92865. May 9 00:13:25.060760 systemd[1]: Started cri-containerd-c4dfcd62c52f6a93facc1d56daa069aec46f24ea66b921de0e53ed91056e978a.scope - libcontainer container c4dfcd62c52f6a93facc1d56daa069aec46f24ea66b921de0e53ed91056e978a. May 9 00:13:25.068444 systemd[1]: Started cri-containerd-ec79b46b09ed86a8d2a4f2a7345a08bffa4df7132b729e4debee05758b0e7f04.scope - libcontainer container ec79b46b09ed86a8d2a4f2a7345a08bffa4df7132b729e4debee05758b0e7f04. May 9 00:13:25.103754 containerd[1517]: time="2025-05-09T00:13:25.103269686Z" level=info msg="StartContainer for \"e95c20c049504276d6c19ef6ef93c115955e433b33e24d0baa0364bcfaf92865\" returns successfully" May 9 00:13:25.112696 containerd[1517]: time="2025-05-09T00:13:25.112669588Z" level=info msg="StartContainer for \"c4dfcd62c52f6a93facc1d56daa069aec46f24ea66b921de0e53ed91056e978a\" returns successfully" May 9 00:13:25.123369 containerd[1517]: time="2025-05-09T00:13:25.123345959Z" level=info msg="StartContainer for \"ec79b46b09ed86a8d2a4f2a7345a08bffa4df7132b729e4debee05758b0e7f04\" returns successfully" May 9 00:13:25.211420 kubelet[2492]: E0509 00:13:25.210456 2492 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://157.180.45.97:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152-2-3-n-8b48d2c086?timeout=10s\": dial tcp 157.180.45.97:6443: connect: connection refused" interval="1.6s" May 9 00:13:25.267443 kubelet[2492]: W0509 00:13:25.267294 2492 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://157.180.45.97:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 157.180.45.97:6443: connect: 
connection refused May 9 00:13:25.267443 kubelet[2492]: E0509 00:13:25.267419 2492 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://157.180.45.97:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 157.180.45.97:6443: connect: connection refused May 9 00:13:25.305062 kubelet[2492]: W0509 00:13:25.304946 2492 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://157.180.45.97:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 157.180.45.97:6443: connect: connection refused May 9 00:13:25.305312 kubelet[2492]: E0509 00:13:25.305282 2492 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://157.180.45.97:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 157.180.45.97:6443: connect: connection refused May 9 00:13:25.317093 kubelet[2492]: I0509 00:13:25.317042 2492 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152-2-3-n-8b48d2c086" May 9 00:13:25.317421 kubelet[2492]: E0509 00:13:25.317391 2492 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://157.180.45.97:6443/api/v1/nodes\": dial tcp 157.180.45.97:6443: connect: connection refused" node="ci-4152-2-3-n-8b48d2c086" May 9 00:13:26.816380 kubelet[2492]: E0509 00:13:26.816289 2492 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4152-2-3-n-8b48d2c086\" not found" node="ci-4152-2-3-n-8b48d2c086" May 9 00:13:26.859176 kubelet[2492]: E0509 00:13:26.859124 2492 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ci-4152-2-3-n-8b48d2c086" not found May 9 00:13:26.920913 kubelet[2492]: I0509 00:13:26.920795 2492 kubelet_node_status.go:73] "Attempting to register node" 
node="ci-4152-2-3-n-8b48d2c086" May 9 00:13:26.939263 kubelet[2492]: I0509 00:13:26.939175 2492 kubelet_node_status.go:76] "Successfully registered node" node="ci-4152-2-3-n-8b48d2c086" May 9 00:13:26.949427 kubelet[2492]: E0509 00:13:26.949351 2492 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4152-2-3-n-8b48d2c086\" not found" May 9 00:13:27.050558 kubelet[2492]: E0509 00:13:27.050485 2492 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4152-2-3-n-8b48d2c086\" not found" May 9 00:13:27.151224 kubelet[2492]: E0509 00:13:27.151178 2492 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4152-2-3-n-8b48d2c086\" not found" May 9 00:13:27.251862 kubelet[2492]: E0509 00:13:27.251818 2492 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4152-2-3-n-8b48d2c086\" not found" May 9 00:13:27.352296 kubelet[2492]: E0509 00:13:27.352247 2492 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4152-2-3-n-8b48d2c086\" not found" May 9 00:13:27.452916 kubelet[2492]: E0509 00:13:27.452790 2492 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4152-2-3-n-8b48d2c086\" not found" May 9 00:13:27.554022 kubelet[2492]: E0509 00:13:27.553972 2492 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4152-2-3-n-8b48d2c086\" not found" May 9 00:13:27.791620 kubelet[2492]: I0509 00:13:27.791277 2492 apiserver.go:52] "Watching apiserver" May 9 00:13:27.804879 kubelet[2492]: I0509 00:13:27.804855 2492 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 9 00:13:28.480867 systemd[1]: Reloading requested from client PID 2767 ('systemctl') (unit session-7.scope)... May 9 00:13:28.480896 systemd[1]: Reloading... May 9 00:13:28.561378 zram_generator::config[2804]: No configuration found. 
May 9 00:13:28.644747 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 9 00:13:28.712360 systemd[1]: Reloading finished in 230 ms. May 9 00:13:28.744245 kubelet[2492]: E0509 00:13:28.743424 2492 event.go:319] "Unable to write event (broadcaster is shut down)" event="&Event{ObjectMeta:{ci-4152-2-3-n-8b48d2c086.183db3828d9e1e86 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4152-2-3-n-8b48d2c086,UID:ci-4152-2-3-n-8b48d2c086,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4152-2-3-n-8b48d2c086,},FirstTimestamp:2025-05-09 00:13:23.792690822 +0000 UTC m=+0.256005147,LastTimestamp:2025-05-09 00:13:23.792690822 +0000 UTC m=+0.256005147,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4152-2-3-n-8b48d2c086,}" May 9 00:13:28.745588 kubelet[2492]: I0509 00:13:28.744347 2492 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 9 00:13:28.744839 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 9 00:13:28.756081 systemd[1]: kubelet.service: Deactivated successfully. May 9 00:13:28.756237 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 9 00:13:28.760778 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 9 00:13:28.845862 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 9 00:13:28.849706 (kubelet)[2858]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 9 00:13:28.885700 kubelet[2858]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 9 00:13:28.885700 kubelet[2858]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 9 00:13:28.885700 kubelet[2858]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 9 00:13:28.886012 kubelet[2858]: I0509 00:13:28.885764 2858 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 9 00:13:28.891887 kubelet[2858]: I0509 00:13:28.891705 2858 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" May 9 00:13:28.892043 kubelet[2858]: I0509 00:13:28.892033 2858 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 9 00:13:28.892490 kubelet[2858]: I0509 00:13:28.892314 2858 server.go:927] "Client rotation is on, will bootstrap in background" May 9 00:13:28.894608 kubelet[2858]: I0509 00:13:28.894596 2858 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 9 00:13:28.896370 kubelet[2858]: I0509 00:13:28.896337 2858 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 9 00:13:28.900733 kubelet[2858]: I0509 00:13:28.900713 2858 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 9 00:13:28.901026 kubelet[2858]: I0509 00:13:28.901004 2858 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 9 00:13:28.901193 kubelet[2858]: I0509 00:13:28.901074 2858 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4152-2-3-n-8b48d2c086","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} May 9 00:13:28.901296 kubelet[2858]: I0509 00:13:28.901288 2858 topology_manager.go:138] "Creating topology manager with none policy" May 9 
00:13:28.901374 kubelet[2858]: I0509 00:13:28.901366 2858 container_manager_linux.go:301] "Creating device plugin manager" May 9 00:13:28.901456 kubelet[2858]: I0509 00:13:28.901449 2858 state_mem.go:36] "Initialized new in-memory state store" May 9 00:13:28.901567 kubelet[2858]: I0509 00:13:28.901558 2858 kubelet.go:400] "Attempting to sync node with API server" May 9 00:13:28.901623 kubelet[2858]: I0509 00:13:28.901615 2858 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" May 9 00:13:28.901673 kubelet[2858]: I0509 00:13:28.901667 2858 kubelet.go:312] "Adding apiserver pod source" May 9 00:13:28.901718 kubelet[2858]: I0509 00:13:28.901712 2858 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 9 00:13:28.902571 kubelet[2858]: I0509 00:13:28.902558 2858 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" May 9 00:13:28.903918 kubelet[2858]: I0509 00:13:28.903906 2858 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 9 00:13:28.904228 kubelet[2858]: I0509 00:13:28.904217 2858 server.go:1264] "Started kubelet" May 9 00:13:28.908152 kubelet[2858]: I0509 00:13:28.908057 2858 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 9 00:13:28.914111 kubelet[2858]: I0509 00:13:28.914051 2858 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 9 00:13:28.916810 kubelet[2858]: I0509 00:13:28.916794 2858 volume_manager.go:291] "Starting Kubelet Volume Manager" May 9 00:13:28.917027 kubelet[2858]: I0509 00:13:28.917004 2858 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 9 00:13:28.917109 kubelet[2858]: I0509 00:13:28.917094 2858 reconciler.go:26] "Reconciler: start to sync state" May 9 00:13:28.921220 kubelet[2858]: I0509 00:13:28.921193 2858 server.go:455] "Adding debug handlers to kubelet server" May 9 00:13:28.921983 kubelet[2858]: I0509 
00:13:28.921942 2858 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 9 00:13:28.923349 kubelet[2858]: I0509 00:13:28.922117 2858 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 9 00:13:28.923349 kubelet[2858]: I0509 00:13:28.922593 2858 factory.go:221] Registration of the systemd container factory successfully May 9 00:13:28.923349 kubelet[2858]: I0509 00:13:28.922640 2858 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 9 00:13:28.925404 kubelet[2858]: E0509 00:13:28.925368 2858 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 9 00:13:28.925460 kubelet[2858]: I0509 00:13:28.925427 2858 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 9 00:13:28.926201 kubelet[2858]: I0509 00:13:28.926189 2858 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 9 00:13:28.926263 kubelet[2858]: I0509 00:13:28.926256 2858 status_manager.go:217] "Starting to sync pod status with apiserver" May 9 00:13:28.926349 kubelet[2858]: I0509 00:13:28.926341 2858 kubelet.go:2337] "Starting kubelet main sync loop" May 9 00:13:28.926461 kubelet[2858]: E0509 00:13:28.926448 2858 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 9 00:13:28.927035 kubelet[2858]: I0509 00:13:28.927013 2858 factory.go:221] Registration of the containerd container factory successfully May 9 00:13:28.970766 kubelet[2858]: I0509 00:13:28.970710 2858 cpu_manager.go:214] "Starting CPU manager" policy="none" May 9 00:13:28.970766 kubelet[2858]: I0509 00:13:28.970745 2858 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 9 00:13:28.970766 kubelet[2858]: I0509 00:13:28.970759 2858 state_mem.go:36] "Initialized new in-memory state store" May 9 00:13:28.970908 kubelet[2858]: I0509 00:13:28.970856 2858 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 9 00:13:28.970908 kubelet[2858]: I0509 00:13:28.970867 2858 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 9 00:13:28.970908 kubelet[2858]: I0509 00:13:28.970887 2858 policy_none.go:49] "None policy: Start" May 9 00:13:28.971462 kubelet[2858]: I0509 00:13:28.971406 2858 memory_manager.go:170] "Starting memorymanager" policy="None" May 9 00:13:28.971462 kubelet[2858]: I0509 00:13:28.971423 2858 state_mem.go:35] "Initializing new in-memory state store" May 9 00:13:28.971542 kubelet[2858]: I0509 00:13:28.971529 2858 state_mem.go:75] "Updated machine memory state" May 9 00:13:28.974707 kubelet[2858]: I0509 00:13:28.974689 2858 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 9 00:13:28.975005 kubelet[2858]: I0509 00:13:28.974815 2858 container_log_manager.go:186] "Initializing 
container log rotate workers" workers=1 monitorPeriod="10s" May 9 00:13:28.975005 kubelet[2858]: I0509 00:13:28.974925 2858 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 9 00:13:29.019879 kubelet[2858]: I0509 00:13:29.019801 2858 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152-2-3-n-8b48d2c086" May 9 00:13:29.026614 kubelet[2858]: I0509 00:13:29.026594 2858 topology_manager.go:215] "Topology Admit Handler" podUID="e2d4b6d331d81deb6ec6be893d40f281" podNamespace="kube-system" podName="kube-scheduler-ci-4152-2-3-n-8b48d2c086" May 9 00:13:29.026699 kubelet[2858]: I0509 00:13:29.026682 2858 topology_manager.go:215] "Topology Admit Handler" podUID="c0b8eec11126708f73a20eabb114ce30" podNamespace="kube-system" podName="kube-apiserver-ci-4152-2-3-n-8b48d2c086" May 9 00:13:29.026733 kubelet[2858]: I0509 00:13:29.026717 2858 topology_manager.go:215] "Topology Admit Handler" podUID="5652dc5d73f0d68a1640acaddd6a7e06" podNamespace="kube-system" podName="kube-controller-manager-ci-4152-2-3-n-8b48d2c086" May 9 00:13:29.029075 kubelet[2858]: I0509 00:13:29.028971 2858 kubelet_node_status.go:112] "Node was previously registered" node="ci-4152-2-3-n-8b48d2c086" May 9 00:13:29.029075 kubelet[2858]: I0509 00:13:29.029012 2858 kubelet_node_status.go:76] "Successfully registered node" node="ci-4152-2-3-n-8b48d2c086" May 9 00:13:29.218778 kubelet[2858]: I0509 00:13:29.218731 2858 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e2d4b6d331d81deb6ec6be893d40f281-kubeconfig\") pod \"kube-scheduler-ci-4152-2-3-n-8b48d2c086\" (UID: \"e2d4b6d331d81deb6ec6be893d40f281\") " pod="kube-system/kube-scheduler-ci-4152-2-3-n-8b48d2c086" May 9 00:13:29.218778 kubelet[2858]: I0509 00:13:29.218775 2858 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/c0b8eec11126708f73a20eabb114ce30-ca-certs\") pod \"kube-apiserver-ci-4152-2-3-n-8b48d2c086\" (UID: \"c0b8eec11126708f73a20eabb114ce30\") " pod="kube-system/kube-apiserver-ci-4152-2-3-n-8b48d2c086" May 9 00:13:29.218778 kubelet[2858]: I0509 00:13:29.218793 2858 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c0b8eec11126708f73a20eabb114ce30-k8s-certs\") pod \"kube-apiserver-ci-4152-2-3-n-8b48d2c086\" (UID: \"c0b8eec11126708f73a20eabb114ce30\") " pod="kube-system/kube-apiserver-ci-4152-2-3-n-8b48d2c086" May 9 00:13:29.218778 kubelet[2858]: I0509 00:13:29.218811 2858 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c0b8eec11126708f73a20eabb114ce30-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4152-2-3-n-8b48d2c086\" (UID: \"c0b8eec11126708f73a20eabb114ce30\") " pod="kube-system/kube-apiserver-ci-4152-2-3-n-8b48d2c086" May 9 00:13:29.218778 kubelet[2858]: I0509 00:13:29.218828 2858 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5652dc5d73f0d68a1640acaddd6a7e06-ca-certs\") pod \"kube-controller-manager-ci-4152-2-3-n-8b48d2c086\" (UID: \"5652dc5d73f0d68a1640acaddd6a7e06\") " pod="kube-system/kube-controller-manager-ci-4152-2-3-n-8b48d2c086" May 9 00:13:29.219109 kubelet[2858]: I0509 00:13:29.218843 2858 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5652dc5d73f0d68a1640acaddd6a7e06-flexvolume-dir\") pod \"kube-controller-manager-ci-4152-2-3-n-8b48d2c086\" (UID: \"5652dc5d73f0d68a1640acaddd6a7e06\") " pod="kube-system/kube-controller-manager-ci-4152-2-3-n-8b48d2c086" May 9 00:13:29.219109 kubelet[2858]: I0509 00:13:29.218857 
2858 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5652dc5d73f0d68a1640acaddd6a7e06-k8s-certs\") pod \"kube-controller-manager-ci-4152-2-3-n-8b48d2c086\" (UID: \"5652dc5d73f0d68a1640acaddd6a7e06\") " pod="kube-system/kube-controller-manager-ci-4152-2-3-n-8b48d2c086" May 9 00:13:29.219109 kubelet[2858]: I0509 00:13:29.218871 2858 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5652dc5d73f0d68a1640acaddd6a7e06-kubeconfig\") pod \"kube-controller-manager-ci-4152-2-3-n-8b48d2c086\" (UID: \"5652dc5d73f0d68a1640acaddd6a7e06\") " pod="kube-system/kube-controller-manager-ci-4152-2-3-n-8b48d2c086" May 9 00:13:29.219109 kubelet[2858]: I0509 00:13:29.218894 2858 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5652dc5d73f0d68a1640acaddd6a7e06-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4152-2-3-n-8b48d2c086\" (UID: \"5652dc5d73f0d68a1640acaddd6a7e06\") " pod="kube-system/kube-controller-manager-ci-4152-2-3-n-8b48d2c086" May 9 00:13:29.497464 sudo[2889]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin May 9 00:13:29.498290 sudo[2889]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) May 9 00:13:29.902313 kubelet[2858]: I0509 00:13:29.902069 2858 apiserver.go:52] "Watching apiserver" May 9 00:13:29.918099 kubelet[2858]: I0509 00:13:29.918051 2858 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 9 00:13:29.966522 sudo[2889]: pam_unix(sudo:session): session closed for user root May 9 00:13:29.967468 kubelet[2858]: E0509 00:13:29.967349 2858 kubelet.go:1928] "Failed creating a mirror pod for" err="pods 
\"kube-apiserver-ci-4152-2-3-n-8b48d2c086\" already exists" pod="kube-system/kube-apiserver-ci-4152-2-3-n-8b48d2c086" May 9 00:13:29.984621 kubelet[2858]: I0509 00:13:29.984460 2858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4152-2-3-n-8b48d2c086" podStartSLOduration=0.984445659 podStartE2EDuration="984.445659ms" podCreationTimestamp="2025-05-09 00:13:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-09 00:13:29.981620397 +0000 UTC m=+1.125846252" watchObservedRunningTime="2025-05-09 00:13:29.984445659 +0000 UTC m=+1.128671505" May 9 00:13:29.992610 kubelet[2858]: I0509 00:13:29.992572 2858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4152-2-3-n-8b48d2c086" podStartSLOduration=0.992559525 podStartE2EDuration="992.559525ms" podCreationTimestamp="2025-05-09 00:13:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-09 00:13:29.992272703 +0000 UTC m=+1.136498558" watchObservedRunningTime="2025-05-09 00:13:29.992559525 +0000 UTC m=+1.136785370" May 9 00:13:30.007179 kubelet[2858]: I0509 00:13:30.007138 2858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4152-2-3-n-8b48d2c086" podStartSLOduration=1.007121591 podStartE2EDuration="1.007121591s" podCreationTimestamp="2025-05-09 00:13:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-09 00:13:30.000423138 +0000 UTC m=+1.144648993" watchObservedRunningTime="2025-05-09 00:13:30.007121591 +0000 UTC m=+1.151347436" May 9 00:13:31.545246 sudo[1912]: pam_unix(sudo:session): session closed for user root May 9 00:13:31.702459 sshd[1911]: Connection closed by 
139.178.68.195 port 58028 May 9 00:13:31.703848 sshd-session[1909]: pam_unix(sshd:session): session closed for user core May 9 00:13:31.706373 systemd[1]: sshd@6-157.180.45.97:22-139.178.68.195:58028.service: Deactivated successfully. May 9 00:13:31.707914 systemd[1]: session-7.scope: Deactivated successfully. May 9 00:13:31.708102 systemd[1]: session-7.scope: Consumed 4.315s CPU time, 185.1M memory peak, 0B memory swap peak. May 9 00:13:31.709048 systemd-logind[1495]: Session 7 logged out. Waiting for processes to exit. May 9 00:13:31.710423 systemd-logind[1495]: Removed session 7. May 9 00:13:43.435073 kubelet[2858]: I0509 00:13:43.435045 2858 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 9 00:13:43.435807 containerd[1517]: time="2025-05-09T00:13:43.435564836Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 9 00:13:43.436085 kubelet[2858]: I0509 00:13:43.436019 2858 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 9 00:13:43.466683 kubelet[2858]: I0509 00:13:43.466640 2858 topology_manager.go:215] "Topology Admit Handler" podUID="6bffb258-f090-4cc7-bbfd-04965d8552e6" podNamespace="kube-system" podName="kube-proxy-k5l55" May 9 00:13:43.471278 kubelet[2858]: W0509 00:13:43.471252 2858 reflector.go:547] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ci-4152-2-3-n-8b48d2c086" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4152-2-3-n-8b48d2c086' and this object May 9 00:13:43.471278 kubelet[2858]: E0509 00:13:43.471277 2858 reflector.go:150] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ci-4152-2-3-n-8b48d2c086" cannot list resource "configmaps" in API 
group "" in the namespace "kube-system": no relationship found between node 'ci-4152-2-3-n-8b48d2c086' and this object May 9 00:13:43.471475 kubelet[2858]: W0509 00:13:43.471448 2858 reflector.go:547] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4152-2-3-n-8b48d2c086" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4152-2-3-n-8b48d2c086' and this object May 9 00:13:43.471523 kubelet[2858]: E0509 00:13:43.471476 2858 reflector.go:150] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4152-2-3-n-8b48d2c086" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4152-2-3-n-8b48d2c086' and this object May 9 00:13:43.474391 kubelet[2858]: I0509 00:13:43.474347 2858 topology_manager.go:215] "Topology Admit Handler" podUID="8dcedc0e-11a9-42de-b292-9e0db07cf3f3" podNamespace="kube-system" podName="cilium-b84b5" May 9 00:13:43.475952 systemd[1]: Created slice kubepods-besteffort-pod6bffb258_f090_4cc7_bbfd_04965d8552e6.slice - libcontainer container kubepods-besteffort-pod6bffb258_f090_4cc7_bbfd_04965d8552e6.slice. May 9 00:13:43.488674 systemd[1]: Created slice kubepods-burstable-pod8dcedc0e_11a9_42de_b292_9e0db07cf3f3.slice - libcontainer container kubepods-burstable-pod8dcedc0e_11a9_42de_b292_9e0db07cf3f3.slice. 
May 9 00:13:43.509015 kubelet[2858]: I0509 00:13:43.508647 2858 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8dcedc0e-11a9-42de-b292-9e0db07cf3f3-host-proc-sys-kernel\") pod \"cilium-b84b5\" (UID: \"8dcedc0e-11a9-42de-b292-9e0db07cf3f3\") " pod="kube-system/cilium-b84b5" May 9 00:13:43.509015 kubelet[2858]: I0509 00:13:43.508686 2858 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fg4rk\" (UniqueName: \"kubernetes.io/projected/8dcedc0e-11a9-42de-b292-9e0db07cf3f3-kube-api-access-fg4rk\") pod \"cilium-b84b5\" (UID: \"8dcedc0e-11a9-42de-b292-9e0db07cf3f3\") " pod="kube-system/cilium-b84b5" May 9 00:13:43.509015 kubelet[2858]: I0509 00:13:43.508703 2858 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8dcedc0e-11a9-42de-b292-9e0db07cf3f3-lib-modules\") pod \"cilium-b84b5\" (UID: \"8dcedc0e-11a9-42de-b292-9e0db07cf3f3\") " pod="kube-system/cilium-b84b5" May 9 00:13:43.509015 kubelet[2858]: I0509 00:13:43.508716 2858 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6gll\" (UniqueName: \"kubernetes.io/projected/6bffb258-f090-4cc7-bbfd-04965d8552e6-kube-api-access-f6gll\") pod \"kube-proxy-k5l55\" (UID: \"6bffb258-f090-4cc7-bbfd-04965d8552e6\") " pod="kube-system/kube-proxy-k5l55" May 9 00:13:43.509015 kubelet[2858]: I0509 00:13:43.508741 2858 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8dcedc0e-11a9-42de-b292-9e0db07cf3f3-etc-cni-netd\") pod \"cilium-b84b5\" (UID: \"8dcedc0e-11a9-42de-b292-9e0db07cf3f3\") " pod="kube-system/cilium-b84b5" May 9 00:13:43.509197 kubelet[2858]: I0509 00:13:43.508752 2858 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8dcedc0e-11a9-42de-b292-9e0db07cf3f3-xtables-lock\") pod \"cilium-b84b5\" (UID: \"8dcedc0e-11a9-42de-b292-9e0db07cf3f3\") " pod="kube-system/cilium-b84b5" May 9 00:13:43.509197 kubelet[2858]: I0509 00:13:43.508763 2858 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8dcedc0e-11a9-42de-b292-9e0db07cf3f3-cilium-config-path\") pod \"cilium-b84b5\" (UID: \"8dcedc0e-11a9-42de-b292-9e0db07cf3f3\") " pod="kube-system/cilium-b84b5" May 9 00:13:43.509197 kubelet[2858]: I0509 00:13:43.508778 2858 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8dcedc0e-11a9-42de-b292-9e0db07cf3f3-cilium-run\") pod \"cilium-b84b5\" (UID: \"8dcedc0e-11a9-42de-b292-9e0db07cf3f3\") " pod="kube-system/cilium-b84b5" May 9 00:13:43.509197 kubelet[2858]: I0509 00:13:43.508789 2858 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8dcedc0e-11a9-42de-b292-9e0db07cf3f3-host-proc-sys-net\") pod \"cilium-b84b5\" (UID: \"8dcedc0e-11a9-42de-b292-9e0db07cf3f3\") " pod="kube-system/cilium-b84b5" May 9 00:13:43.509197 kubelet[2858]: I0509 00:13:43.508801 2858 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6bffb258-f090-4cc7-bbfd-04965d8552e6-xtables-lock\") pod \"kube-proxy-k5l55\" (UID: \"6bffb258-f090-4cc7-bbfd-04965d8552e6\") " pod="kube-system/kube-proxy-k5l55" May 9 00:13:43.509197 kubelet[2858]: I0509 00:13:43.508818 2858 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: 
\"kubernetes.io/projected/8dcedc0e-11a9-42de-b292-9e0db07cf3f3-hubble-tls\") pod \"cilium-b84b5\" (UID: \"8dcedc0e-11a9-42de-b292-9e0db07cf3f3\") " pod="kube-system/cilium-b84b5" May 9 00:13:43.509318 kubelet[2858]: I0509 00:13:43.508831 2858 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8dcedc0e-11a9-42de-b292-9e0db07cf3f3-bpf-maps\") pod \"cilium-b84b5\" (UID: \"8dcedc0e-11a9-42de-b292-9e0db07cf3f3\") " pod="kube-system/cilium-b84b5" May 9 00:13:43.509318 kubelet[2858]: I0509 00:13:43.508845 2858 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6bffb258-f090-4cc7-bbfd-04965d8552e6-lib-modules\") pod \"kube-proxy-k5l55\" (UID: \"6bffb258-f090-4cc7-bbfd-04965d8552e6\") " pod="kube-system/kube-proxy-k5l55" May 9 00:13:43.509318 kubelet[2858]: I0509 00:13:43.508856 2858 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8dcedc0e-11a9-42de-b292-9e0db07cf3f3-hostproc\") pod \"cilium-b84b5\" (UID: \"8dcedc0e-11a9-42de-b292-9e0db07cf3f3\") " pod="kube-system/cilium-b84b5" May 9 00:13:43.509318 kubelet[2858]: I0509 00:13:43.508866 2858 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8dcedc0e-11a9-42de-b292-9e0db07cf3f3-cni-path\") pod \"cilium-b84b5\" (UID: \"8dcedc0e-11a9-42de-b292-9e0db07cf3f3\") " pod="kube-system/cilium-b84b5" May 9 00:13:43.509318 kubelet[2858]: I0509 00:13:43.508876 2858 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/6bffb258-f090-4cc7-bbfd-04965d8552e6-kube-proxy\") pod \"kube-proxy-k5l55\" (UID: \"6bffb258-f090-4cc7-bbfd-04965d8552e6\") " 
pod="kube-system/kube-proxy-k5l55" May 9 00:13:43.509318 kubelet[2858]: I0509 00:13:43.508887 2858 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8dcedc0e-11a9-42de-b292-9e0db07cf3f3-cilium-cgroup\") pod \"cilium-b84b5\" (UID: \"8dcedc0e-11a9-42de-b292-9e0db07cf3f3\") " pod="kube-system/cilium-b84b5" May 9 00:13:43.509433 kubelet[2858]: I0509 00:13:43.508900 2858 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8dcedc0e-11a9-42de-b292-9e0db07cf3f3-clustermesh-secrets\") pod \"cilium-b84b5\" (UID: \"8dcedc0e-11a9-42de-b292-9e0db07cf3f3\") " pod="kube-system/cilium-b84b5" May 9 00:13:43.653581 kubelet[2858]: I0509 00:13:43.652847 2858 topology_manager.go:215] "Topology Admit Handler" podUID="789409de-01e7-47e9-940b-9208b464f021" podNamespace="kube-system" podName="cilium-operator-599987898-g8djb" May 9 00:13:43.659807 systemd[1]: Created slice kubepods-besteffort-pod789409de_01e7_47e9_940b_9208b464f021.slice - libcontainer container kubepods-besteffort-pod789409de_01e7_47e9_940b_9208b464f021.slice. 
May 9 00:13:43.710528 kubelet[2858]: I0509 00:13:43.710405 2858 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/789409de-01e7-47e9-940b-9208b464f021-cilium-config-path\") pod \"cilium-operator-599987898-g8djb\" (UID: \"789409de-01e7-47e9-940b-9208b464f021\") " pod="kube-system/cilium-operator-599987898-g8djb" May 9 00:13:43.710528 kubelet[2858]: I0509 00:13:43.710464 2858 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4sxff\" (UniqueName: \"kubernetes.io/projected/789409de-01e7-47e9-940b-9208b464f021-kube-api-access-4sxff\") pod \"cilium-operator-599987898-g8djb\" (UID: \"789409de-01e7-47e9-940b-9208b464f021\") " pod="kube-system/cilium-operator-599987898-g8djb" May 9 00:13:44.625455 kubelet[2858]: E0509 00:13:44.625378 2858 configmap.go:199] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition May 9 00:13:44.626068 kubelet[2858]: E0509 00:13:44.625539 2858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6bffb258-f090-4cc7-bbfd-04965d8552e6-kube-proxy podName:6bffb258-f090-4cc7-bbfd-04965d8552e6 nodeName:}" failed. No retries permitted until 2025-05-09 00:13:45.125507144 +0000 UTC m=+16.269733029 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/6bffb258-f090-4cc7-bbfd-04965d8552e6-kube-proxy") pod "kube-proxy-k5l55" (UID: "6bffb258-f090-4cc7-bbfd-04965d8552e6") : failed to sync configmap cache: timed out waiting for the condition May 9 00:13:44.644775 kubelet[2858]: E0509 00:13:44.644719 2858 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition May 9 00:13:44.644775 kubelet[2858]: E0509 00:13:44.644772 2858 projected.go:200] Error preparing data for projected volume kube-api-access-fg4rk for pod kube-system/cilium-b84b5: failed to sync configmap cache: timed out waiting for the condition May 9 00:13:44.645020 kubelet[2858]: E0509 00:13:44.644856 2858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8dcedc0e-11a9-42de-b292-9e0db07cf3f3-kube-api-access-fg4rk podName:8dcedc0e-11a9-42de-b292-9e0db07cf3f3 nodeName:}" failed. No retries permitted until 2025-05-09 00:13:45.144832753 +0000 UTC m=+16.289058628 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-fg4rk" (UniqueName: "kubernetes.io/projected/8dcedc0e-11a9-42de-b292-9e0db07cf3f3-kube-api-access-fg4rk") pod "cilium-b84b5" (UID: "8dcedc0e-11a9-42de-b292-9e0db07cf3f3") : failed to sync configmap cache: timed out waiting for the condition May 9 00:13:44.645363 kubelet[2858]: E0509 00:13:44.645221 2858 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition May 9 00:13:44.645363 kubelet[2858]: E0509 00:13:44.645251 2858 projected.go:200] Error preparing data for projected volume kube-api-access-f6gll for pod kube-system/kube-proxy-k5l55: failed to sync configmap cache: timed out waiting for the condition May 9 00:13:44.645363 kubelet[2858]: E0509 00:13:44.645336 2858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6bffb258-f090-4cc7-bbfd-04965d8552e6-kube-api-access-f6gll podName:6bffb258-f090-4cc7-bbfd-04965d8552e6 nodeName:}" failed. No retries permitted until 2025-05-09 00:13:45.14528759 +0000 UTC m=+16.289513466 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-f6gll" (UniqueName: "kubernetes.io/projected/6bffb258-f090-4cc7-bbfd-04965d8552e6-kube-api-access-f6gll") pod "kube-proxy-k5l55" (UID: "6bffb258-f090-4cc7-bbfd-04965d8552e6") : failed to sync configmap cache: timed out waiting for the condition May 9 00:13:44.821878 kubelet[2858]: E0509 00:13:44.821802 2858 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition May 9 00:13:44.821878 kubelet[2858]: E0509 00:13:44.821871 2858 projected.go:200] Error preparing data for projected volume kube-api-access-4sxff for pod kube-system/cilium-operator-599987898-g8djb: failed to sync configmap cache: timed out waiting for the condition May 9 00:13:44.822138 kubelet[2858]: E0509 00:13:44.821985 2858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/789409de-01e7-47e9-940b-9208b464f021-kube-api-access-4sxff podName:789409de-01e7-47e9-940b-9208b464f021 nodeName:}" failed. No retries permitted until 2025-05-09 00:13:45.32195841 +0000 UTC m=+16.466184295 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-4sxff" (UniqueName: "kubernetes.io/projected/789409de-01e7-47e9-940b-9208b464f021-kube-api-access-4sxff") pod "cilium-operator-599987898-g8djb" (UID: "789409de-01e7-47e9-940b-9208b464f021") : failed to sync configmap cache: timed out waiting for the condition May 9 00:13:45.284257 containerd[1517]: time="2025-05-09T00:13:45.284111948Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-k5l55,Uid:6bffb258-f090-4cc7-bbfd-04965d8552e6,Namespace:kube-system,Attempt:0,}" May 9 00:13:45.292276 containerd[1517]: time="2025-05-09T00:13:45.291702105Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-b84b5,Uid:8dcedc0e-11a9-42de-b292-9e0db07cf3f3,Namespace:kube-system,Attempt:0,}" May 9 00:13:45.347282 containerd[1517]: time="2025-05-09T00:13:45.347132593Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 9 00:13:45.348114 containerd[1517]: time="2025-05-09T00:13:45.348050135Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 9 00:13:45.348114 containerd[1517]: time="2025-05-09T00:13:45.348072168Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:13:45.349663 containerd[1517]: time="2025-05-09T00:13:45.349436262Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:13:45.354119 containerd[1517]: time="2025-05-09T00:13:45.354050662Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 9 00:13:45.356494 containerd[1517]: time="2025-05-09T00:13:45.356324303Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 9 00:13:45.356494 containerd[1517]: time="2025-05-09T00:13:45.356340825Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:13:45.356494 containerd[1517]: time="2025-05-09T00:13:45.356434342Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:13:45.368451 systemd[1]: Started cri-containerd-ec3fe6818218de71b7b6f5bc6a739696a589d41ba2894ac558b782bc9c54af18.scope - libcontainer container ec3fe6818218de71b7b6f5bc6a739696a589d41ba2894ac558b782bc9c54af18. May 9 00:13:45.371819 systemd[1]: Started cri-containerd-414fe5482e172e944effad1803daddb61c718e93993ae30c6068055a2cb36c4e.scope - libcontainer container 414fe5482e172e944effad1803daddb61c718e93993ae30c6068055a2cb36c4e. May 9 00:13:45.396377 containerd[1517]: time="2025-05-09T00:13:45.396166509Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-k5l55,Uid:6bffb258-f090-4cc7-bbfd-04965d8552e6,Namespace:kube-system,Attempt:0,} returns sandbox id \"ec3fe6818218de71b7b6f5bc6a739696a589d41ba2894ac558b782bc9c54af18\"" May 9 00:13:45.400051 containerd[1517]: time="2025-05-09T00:13:45.399901819Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-b84b5,Uid:8dcedc0e-11a9-42de-b292-9e0db07cf3f3,Namespace:kube-system,Attempt:0,} returns sandbox id \"414fe5482e172e944effad1803daddb61c718e93993ae30c6068055a2cb36c4e\"" May 9 00:13:45.401550 containerd[1517]: time="2025-05-09T00:13:45.401518932Z" level=info msg="CreateContainer within sandbox \"ec3fe6818218de71b7b6f5bc6a739696a589d41ba2894ac558b782bc9c54af18\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 9 00:13:45.402601 containerd[1517]: time="2025-05-09T00:13:45.402353387Z" level=info msg="PullImage 
\"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" May 9 00:13:45.415901 containerd[1517]: time="2025-05-09T00:13:45.415871403Z" level=info msg="CreateContainer within sandbox \"ec3fe6818218de71b7b6f5bc6a739696a589d41ba2894ac558b782bc9c54af18\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"62e82ff7562d55c9fcd0abcb29b67cceba4a1ddb5d1e0a4500c3651170e80749\"" May 9 00:13:45.416223 containerd[1517]: time="2025-05-09T00:13:45.416180727Z" level=info msg="StartContainer for \"62e82ff7562d55c9fcd0abcb29b67cceba4a1ddb5d1e0a4500c3651170e80749\"" May 9 00:13:45.437439 systemd[1]: Started cri-containerd-62e82ff7562d55c9fcd0abcb29b67cceba4a1ddb5d1e0a4500c3651170e80749.scope - libcontainer container 62e82ff7562d55c9fcd0abcb29b67cceba4a1ddb5d1e0a4500c3651170e80749. May 9 00:13:45.458716 containerd[1517]: time="2025-05-09T00:13:45.458687552Z" level=info msg="StartContainer for \"62e82ff7562d55c9fcd0abcb29b67cceba4a1ddb5d1e0a4500c3651170e80749\" returns successfully" May 9 00:13:45.462800 containerd[1517]: time="2025-05-09T00:13:45.462568777Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-g8djb,Uid:789409de-01e7-47e9-940b-9208b464f021,Namespace:kube-system,Attempt:0,}" May 9 00:13:45.478380 containerd[1517]: time="2025-05-09T00:13:45.478124841Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 9 00:13:45.478380 containerd[1517]: time="2025-05-09T00:13:45.478175707Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 9 00:13:45.478380 containerd[1517]: time="2025-05-09T00:13:45.478189673Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:13:45.478380 containerd[1517]: time="2025-05-09T00:13:45.478255127Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:13:45.495416 systemd[1]: Started cri-containerd-b825fe20b8ea591c55f0874c74dec746451a4c194dfc7159cc7653a9191f8248.scope - libcontainer container b825fe20b8ea591c55f0874c74dec746451a4c194dfc7159cc7653a9191f8248. May 9 00:13:45.529621 containerd[1517]: time="2025-05-09T00:13:45.529515304Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-g8djb,Uid:789409de-01e7-47e9-940b-9208b464f021,Namespace:kube-system,Attempt:0,} returns sandbox id \"b825fe20b8ea591c55f0874c74dec746451a4c194dfc7159cc7653a9191f8248\"" May 9 00:13:49.858609 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3924266249.mount: Deactivated successfully. May 9 00:13:51.079685 containerd[1517]: time="2025-05-09T00:13:51.079554154Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:13:51.081063 containerd[1517]: time="2025-05-09T00:13:51.080752907Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" May 9 00:13:51.081063 containerd[1517]: time="2025-05-09T00:13:51.080895626Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:13:51.082111 containerd[1517]: time="2025-05-09T00:13:51.082020910Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id 
\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 5.679638298s" May 9 00:13:51.082111 containerd[1517]: time="2025-05-09T00:13:51.082044365Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" May 9 00:13:51.083834 containerd[1517]: time="2025-05-09T00:13:51.083264638Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" May 9 00:13:51.084801 containerd[1517]: time="2025-05-09T00:13:51.084463942Z" level=info msg="CreateContainer within sandbox \"414fe5482e172e944effad1803daddb61c718e93993ae30c6068055a2cb36c4e\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 9 00:13:51.136455 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3498684110.mount: Deactivated successfully. May 9 00:13:51.187261 containerd[1517]: time="2025-05-09T00:13:51.187183434Z" level=info msg="CreateContainer within sandbox \"414fe5482e172e944effad1803daddb61c718e93993ae30c6068055a2cb36c4e\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"58ea4445266e0b54b7a75153ef5165bdb11e40c693c8f67d0eb0da1af14fcbb2\"" May 9 00:13:51.188954 containerd[1517]: time="2025-05-09T00:13:51.188060520Z" level=info msg="StartContainer for \"58ea4445266e0b54b7a75153ef5165bdb11e40c693c8f67d0eb0da1af14fcbb2\"" May 9 00:13:51.298421 systemd[1]: Started cri-containerd-58ea4445266e0b54b7a75153ef5165bdb11e40c693c8f67d0eb0da1af14fcbb2.scope - libcontainer container 58ea4445266e0b54b7a75153ef5165bdb11e40c693c8f67d0eb0da1af14fcbb2. 
May 9 00:13:51.318798 containerd[1517]: time="2025-05-09T00:13:51.318769865Z" level=info msg="StartContainer for \"58ea4445266e0b54b7a75153ef5165bdb11e40c693c8f67d0eb0da1af14fcbb2\" returns successfully" May 9 00:13:51.327484 systemd[1]: cri-containerd-58ea4445266e0b54b7a75153ef5165bdb11e40c693c8f67d0eb0da1af14fcbb2.scope: Deactivated successfully. May 9 00:13:51.433665 containerd[1517]: time="2025-05-09T00:13:51.410954822Z" level=info msg="shim disconnected" id=58ea4445266e0b54b7a75153ef5165bdb11e40c693c8f67d0eb0da1af14fcbb2 namespace=k8s.io May 9 00:13:51.433665 containerd[1517]: time="2025-05-09T00:13:51.433559811Z" level=warning msg="cleaning up after shim disconnected" id=58ea4445266e0b54b7a75153ef5165bdb11e40c693c8f67d0eb0da1af14fcbb2 namespace=k8s.io May 9 00:13:51.433665 containerd[1517]: time="2025-05-09T00:13:51.433570381Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 9 00:13:52.020811 containerd[1517]: time="2025-05-09T00:13:52.020549826Z" level=info msg="CreateContainer within sandbox \"414fe5482e172e944effad1803daddb61c718e93993ae30c6068055a2cb36c4e\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 9 00:13:52.041663 containerd[1517]: time="2025-05-09T00:13:52.041596724Z" level=info msg="CreateContainer within sandbox \"414fe5482e172e944effad1803daddb61c718e93993ae30c6068055a2cb36c4e\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"59d87a85360d916aa63249a398e56a8084d35763473bdde2a687b0a583d46869\"" May 9 00:13:52.044653 containerd[1517]: time="2025-05-09T00:13:52.044502749Z" level=info msg="StartContainer for \"59d87a85360d916aa63249a398e56a8084d35763473bdde2a687b0a583d46869\"" May 9 00:13:52.050409 kubelet[2858]: I0509 00:13:52.049276 2858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-k5l55" podStartSLOduration=9.049213091 podStartE2EDuration="9.049213091s" podCreationTimestamp="2025-05-09 00:13:43 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-09 00:13:46.00290455 +0000 UTC m=+17.147130405" watchObservedRunningTime="2025-05-09 00:13:52.049213091 +0000 UTC m=+23.193438976" May 9 00:13:52.083504 systemd[1]: Started cri-containerd-59d87a85360d916aa63249a398e56a8084d35763473bdde2a687b0a583d46869.scope - libcontainer container 59d87a85360d916aa63249a398e56a8084d35763473bdde2a687b0a583d46869. May 9 00:13:52.112534 containerd[1517]: time="2025-05-09T00:13:52.112371599Z" level=info msg="StartContainer for \"59d87a85360d916aa63249a398e56a8084d35763473bdde2a687b0a583d46869\" returns successfully" May 9 00:13:52.126482 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 9 00:13:52.127026 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 9 00:13:52.127191 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... May 9 00:13:52.132706 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 9 00:13:52.135161 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-58ea4445266e0b54b7a75153ef5165bdb11e40c693c8f67d0eb0da1af14fcbb2-rootfs.mount: Deactivated successfully. May 9 00:13:52.136283 systemd[1]: cri-containerd-59d87a85360d916aa63249a398e56a8084d35763473bdde2a687b0a583d46869.scope: Deactivated successfully. May 9 00:13:52.147838 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-59d87a85360d916aa63249a398e56a8084d35763473bdde2a687b0a583d46869-rootfs.mount: Deactivated successfully. 
May 9 00:13:52.156370 containerd[1517]: time="2025-05-09T00:13:52.156318032Z" level=info msg="shim disconnected" id=59d87a85360d916aa63249a398e56a8084d35763473bdde2a687b0a583d46869 namespace=k8s.io May 9 00:13:52.156370 containerd[1517]: time="2025-05-09T00:13:52.156361543Z" level=warning msg="cleaning up after shim disconnected" id=59d87a85360d916aa63249a398e56a8084d35763473bdde2a687b0a583d46869 namespace=k8s.io May 9 00:13:52.156370 containerd[1517]: time="2025-05-09T00:13:52.156369317Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 9 00:13:52.170286 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 9 00:13:52.171336 containerd[1517]: time="2025-05-09T00:13:52.170786712Z" level=warning msg="cleanup warnings time=\"2025-05-09T00:13:52Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io May 9 00:13:53.020363 containerd[1517]: time="2025-05-09T00:13:53.020249808Z" level=info msg="CreateContainer within sandbox \"414fe5482e172e944effad1803daddb61c718e93993ae30c6068055a2cb36c4e\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 9 00:13:53.047937 containerd[1517]: time="2025-05-09T00:13:53.047868392Z" level=info msg="CreateContainer within sandbox \"414fe5482e172e944effad1803daddb61c718e93993ae30c6068055a2cb36c4e\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"25a67c8a378a7bc8447efe87bf5bd40d6c0de29b84bbfed4011cea25175a1f85\"" May 9 00:13:53.049594 containerd[1517]: time="2025-05-09T00:13:53.048358477Z" level=info msg="StartContainer for \"25a67c8a378a7bc8447efe87bf5bd40d6c0de29b84bbfed4011cea25175a1f85\"" May 9 00:13:53.075516 systemd[1]: Started cri-containerd-25a67c8a378a7bc8447efe87bf5bd40d6c0de29b84bbfed4011cea25175a1f85.scope - libcontainer container 25a67c8a378a7bc8447efe87bf5bd40d6c0de29b84bbfed4011cea25175a1f85. 
May 9 00:13:53.099729 containerd[1517]: time="2025-05-09T00:13:53.099669051Z" level=info msg="StartContainer for \"25a67c8a378a7bc8447efe87bf5bd40d6c0de29b84bbfed4011cea25175a1f85\" returns successfully" May 9 00:13:53.100714 systemd[1]: cri-containerd-25a67c8a378a7bc8447efe87bf5bd40d6c0de29b84bbfed4011cea25175a1f85.scope: Deactivated successfully. May 9 00:13:53.119970 containerd[1517]: time="2025-05-09T00:13:53.119912432Z" level=info msg="shim disconnected" id=25a67c8a378a7bc8447efe87bf5bd40d6c0de29b84bbfed4011cea25175a1f85 namespace=k8s.io May 9 00:13:53.120267 containerd[1517]: time="2025-05-09T00:13:53.119970470Z" level=warning msg="cleaning up after shim disconnected" id=25a67c8a378a7bc8447efe87bf5bd40d6c0de29b84bbfed4011cea25175a1f85 namespace=k8s.io May 9 00:13:53.120267 containerd[1517]: time="2025-05-09T00:13:53.119982274Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 9 00:13:53.133957 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-25a67c8a378a7bc8447efe87bf5bd40d6c0de29b84bbfed4011cea25175a1f85-rootfs.mount: Deactivated successfully. 
May 9 00:13:54.027678 containerd[1517]: time="2025-05-09T00:13:54.026635845Z" level=info msg="CreateContainer within sandbox \"414fe5482e172e944effad1803daddb61c718e93993ae30c6068055a2cb36c4e\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 9 00:13:54.056810 containerd[1517]: time="2025-05-09T00:13:54.056715383Z" level=info msg="CreateContainer within sandbox \"414fe5482e172e944effad1803daddb61c718e93993ae30c6068055a2cb36c4e\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"8a56c2db8807cf02225498f98e4a8cb21de96a0284e506c3365d34046d442c93\"" May 9 00:13:54.057580 containerd[1517]: time="2025-05-09T00:13:54.057239642Z" level=info msg="StartContainer for \"8a56c2db8807cf02225498f98e4a8cb21de96a0284e506c3365d34046d442c93\"" May 9 00:13:54.098495 systemd[1]: Started cri-containerd-8a56c2db8807cf02225498f98e4a8cb21de96a0284e506c3365d34046d442c93.scope - libcontainer container 8a56c2db8807cf02225498f98e4a8cb21de96a0284e506c3365d34046d442c93. May 9 00:13:54.125203 systemd[1]: cri-containerd-8a56c2db8807cf02225498f98e4a8cb21de96a0284e506c3365d34046d442c93.scope: Deactivated successfully. May 9 00:13:54.128113 containerd[1517]: time="2025-05-09T00:13:54.127391239Z" level=info msg="StartContainer for \"8a56c2db8807cf02225498f98e4a8cb21de96a0284e506c3365d34046d442c93\" returns successfully" May 9 00:13:54.141794 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8a56c2db8807cf02225498f98e4a8cb21de96a0284e506c3365d34046d442c93-rootfs.mount: Deactivated successfully. 
May 9 00:13:54.149495 containerd[1517]: time="2025-05-09T00:13:54.149414942Z" level=info msg="shim disconnected" id=8a56c2db8807cf02225498f98e4a8cb21de96a0284e506c3365d34046d442c93 namespace=k8s.io May 9 00:13:54.149495 containerd[1517]: time="2025-05-09T00:13:54.149475015Z" level=warning msg="cleaning up after shim disconnected" id=8a56c2db8807cf02225498f98e4a8cb21de96a0284e506c3365d34046d442c93 namespace=k8s.io May 9 00:13:54.149495 containerd[1517]: time="2025-05-09T00:13:54.149481737Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 9 00:13:54.370510 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2175188518.mount: Deactivated successfully. May 9 00:13:55.028023 containerd[1517]: time="2025-05-09T00:13:55.027913659Z" level=info msg="CreateContainer within sandbox \"414fe5482e172e944effad1803daddb61c718e93993ae30c6068055a2cb36c4e\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 9 00:13:55.064454 containerd[1517]: time="2025-05-09T00:13:55.064390142Z" level=info msg="CreateContainer within sandbox \"414fe5482e172e944effad1803daddb61c718e93993ae30c6068055a2cb36c4e\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"00f77ef3b9fcb235c34a6ea569b0427834aa191f83f8c847472e237534eda186\"" May 9 00:13:55.065329 containerd[1517]: time="2025-05-09T00:13:55.065251528Z" level=info msg="StartContainer for \"00f77ef3b9fcb235c34a6ea569b0427834aa191f83f8c847472e237534eda186\"" May 9 00:13:55.089497 systemd[1]: Started cri-containerd-00f77ef3b9fcb235c34a6ea569b0427834aa191f83f8c847472e237534eda186.scope - libcontainer container 00f77ef3b9fcb235c34a6ea569b0427834aa191f83f8c847472e237534eda186. 
May 9 00:13:55.113905 containerd[1517]: time="2025-05-09T00:13:55.113770753Z" level=info msg="StartContainer for \"00f77ef3b9fcb235c34a6ea569b0427834aa191f83f8c847472e237534eda186\" returns successfully" May 9 00:13:55.280385 kubelet[2858]: I0509 00:13:55.280151 2858 kubelet_node_status.go:497] "Fast updating node status as it just became ready" May 9 00:13:55.299403 kubelet[2858]: I0509 00:13:55.299240 2858 topology_manager.go:215] "Topology Admit Handler" podUID="ab613844-7b25-48b5-9dcc-e1dd0bb4b52e" podNamespace="kube-system" podName="coredns-7db6d8ff4d-5zhwk" May 9 00:13:55.305356 kubelet[2858]: I0509 00:13:55.303372 2858 topology_manager.go:215] "Topology Admit Handler" podUID="9f83bf86-f9df-456d-a818-2660b15755cc" podNamespace="kube-system" podName="coredns-7db6d8ff4d-v7qgk" May 9 00:13:55.313183 systemd[1]: Created slice kubepods-burstable-podab613844_7b25_48b5_9dcc_e1dd0bb4b52e.slice - libcontainer container kubepods-burstable-podab613844_7b25_48b5_9dcc_e1dd0bb4b52e.slice. May 9 00:13:55.324170 systemd[1]: Created slice kubepods-burstable-pod9f83bf86_f9df_456d_a818_2660b15755cc.slice - libcontainer container kubepods-burstable-pod9f83bf86_f9df_456d_a818_2660b15755cc.slice. 
May 9 00:13:55.387513 kubelet[2858]: I0509 00:13:55.387282 2858 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ab613844-7b25-48b5-9dcc-e1dd0bb4b52e-config-volume\") pod \"coredns-7db6d8ff4d-5zhwk\" (UID: \"ab613844-7b25-48b5-9dcc-e1dd0bb4b52e\") " pod="kube-system/coredns-7db6d8ff4d-5zhwk" May 9 00:13:55.387513 kubelet[2858]: I0509 00:13:55.387382 2858 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zl44v\" (UniqueName: \"kubernetes.io/projected/ab613844-7b25-48b5-9dcc-e1dd0bb4b52e-kube-api-access-zl44v\") pod \"coredns-7db6d8ff4d-5zhwk\" (UID: \"ab613844-7b25-48b5-9dcc-e1dd0bb4b52e\") " pod="kube-system/coredns-7db6d8ff4d-5zhwk" May 9 00:13:55.387513 kubelet[2858]: I0509 00:13:55.387401 2858 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9f83bf86-f9df-456d-a818-2660b15755cc-config-volume\") pod \"coredns-7db6d8ff4d-v7qgk\" (UID: \"9f83bf86-f9df-456d-a818-2660b15755cc\") " pod="kube-system/coredns-7db6d8ff4d-v7qgk" May 9 00:13:55.387774 kubelet[2858]: I0509 00:13:55.387488 2858 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ctvd9\" (UniqueName: \"kubernetes.io/projected/9f83bf86-f9df-456d-a818-2660b15755cc-kube-api-access-ctvd9\") pod \"coredns-7db6d8ff4d-v7qgk\" (UID: \"9f83bf86-f9df-456d-a818-2660b15755cc\") " pod="kube-system/coredns-7db6d8ff4d-v7qgk" May 9 00:13:55.621149 containerd[1517]: time="2025-05-09T00:13:55.621034532Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-5zhwk,Uid:ab613844-7b25-48b5-9dcc-e1dd0bb4b52e,Namespace:kube-system,Attempt:0,}" May 9 00:13:55.628019 containerd[1517]: time="2025-05-09T00:13:55.627987706Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-7db6d8ff4d-v7qgk,Uid:9f83bf86-f9df-456d-a818-2660b15755cc,Namespace:kube-system,Attempt:0,}" May 9 00:13:56.055360 kubelet[2858]: I0509 00:13:56.055261 2858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-b84b5" podStartSLOduration=7.374471605 podStartE2EDuration="13.055234325s" podCreationTimestamp="2025-05-09 00:13:43 +0000 UTC" firstStartedPulling="2025-05-09 00:13:45.402044333 +0000 UTC m=+16.546270179" lastFinishedPulling="2025-05-09 00:13:51.082807054 +0000 UTC m=+22.227032899" observedRunningTime="2025-05-09 00:13:56.054087951 +0000 UTC m=+27.198313856" watchObservedRunningTime="2025-05-09 00:13:56.055234325 +0000 UTC m=+27.199460210" May 9 00:13:57.055576 containerd[1517]: time="2025-05-09T00:13:57.055506045Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:13:57.056505 containerd[1517]: time="2025-05-09T00:13:57.056399401Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" May 9 00:13:57.057554 containerd[1517]: time="2025-05-09T00:13:57.057513914Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:13:57.058686 containerd[1517]: time="2025-05-09T00:13:57.058420407Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 
5.975118668s" May 9 00:13:57.058686 containerd[1517]: time="2025-05-09T00:13:57.058442949Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" May 9 00:13:57.060973 containerd[1517]: time="2025-05-09T00:13:57.060940804Z" level=info msg="CreateContainer within sandbox \"b825fe20b8ea591c55f0874c74dec746451a4c194dfc7159cc7653a9191f8248\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" May 9 00:13:57.076694 containerd[1517]: time="2025-05-09T00:13:57.076654074Z" level=info msg="CreateContainer within sandbox \"b825fe20b8ea591c55f0874c74dec746451a4c194dfc7159cc7653a9191f8248\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"9b2bba679273c5ab0d102989bd0efb6496c52ffd0ce6e26374720a8dde7a87c6\"" May 9 00:13:57.077024 containerd[1517]: time="2025-05-09T00:13:57.076961685Z" level=info msg="StartContainer for \"9b2bba679273c5ab0d102989bd0efb6496c52ffd0ce6e26374720a8dde7a87c6\"" May 9 00:13:57.098417 systemd[1]: Started cri-containerd-9b2bba679273c5ab0d102989bd0efb6496c52ffd0ce6e26374720a8dde7a87c6.scope - libcontainer container 9b2bba679273c5ab0d102989bd0efb6496c52ffd0ce6e26374720a8dde7a87c6. 
May 9 00:13:57.116882 containerd[1517]: time="2025-05-09T00:13:57.116848074Z" level=info msg="StartContainer for \"9b2bba679273c5ab0d102989bd0efb6496c52ffd0ce6e26374720a8dde7a87c6\" returns successfully"
May 9 00:13:58.052103 kubelet[2858]: I0509 00:13:58.052023 2858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-g8djb" podStartSLOduration=3.52370687 podStartE2EDuration="15.051963778s" podCreationTimestamp="2025-05-09 00:13:43 +0000 UTC" firstStartedPulling="2025-05-09 00:13:45.53072123 +0000 UTC m=+16.674947075" lastFinishedPulling="2025-05-09 00:13:57.058978139 +0000 UTC m=+28.203203983" observedRunningTime="2025-05-09 00:13:58.051160531 +0000 UTC m=+29.195386437" watchObservedRunningTime="2025-05-09 00:13:58.051963778 +0000 UTC m=+29.196189673"
May 9 00:14:01.105678 systemd-networkd[1430]: cilium_host: Link UP
May 9 00:14:01.105937 systemd-networkd[1430]: cilium_net: Link UP
May 9 00:14:01.106265 systemd-networkd[1430]: cilium_net: Gained carrier
May 9 00:14:01.109247 systemd-networkd[1430]: cilium_host: Gained carrier
May 9 00:14:01.190997 systemd-networkd[1430]: cilium_vxlan: Link UP
May 9 00:14:01.191003 systemd-networkd[1430]: cilium_vxlan: Gained carrier
May 9 00:14:01.558576 systemd-networkd[1430]: cilium_net: Gained IPv6LL
May 9 00:14:01.580419 kernel: NET: Registered PF_ALG protocol family
May 9 00:14:02.071366 systemd-networkd[1430]: cilium_host: Gained IPv6LL
May 9 00:14:02.089264 systemd-networkd[1430]: lxc_health: Link UP
May 9 00:14:02.094528 systemd-networkd[1430]: lxc_health: Gained carrier
May 9 00:14:02.189438 systemd-networkd[1430]: lxc93058d2f1d37: Link UP
May 9 00:14:02.195328 kernel: eth0: renamed from tmpf7d9b
May 9 00:14:02.200513 systemd-networkd[1430]: lxc93058d2f1d37: Gained carrier
May 9 00:14:02.202538 systemd-networkd[1430]: lxc4db4a5f5ad85: Link UP
May 9 00:14:02.209471 kernel: eth0: renamed from tmp47cce
May 9 00:14:02.216709 systemd-networkd[1430]: lxc4db4a5f5ad85: Gained carrier
May 9 00:14:02.774761 systemd-networkd[1430]: cilium_vxlan: Gained IPv6LL
May 9 00:14:03.670452 systemd-networkd[1430]: lxc93058d2f1d37: Gained IPv6LL
May 9 00:14:03.734591 systemd-networkd[1430]: lxc4db4a5f5ad85: Gained IPv6LL
May 9 00:14:04.054593 systemd-networkd[1430]: lxc_health: Gained IPv6LL
May 9 00:14:05.377586 containerd[1517]: time="2025-05-09T00:14:05.376812209Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 9 00:14:05.377586 containerd[1517]: time="2025-05-09T00:14:05.377117776Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 9 00:14:05.377586 containerd[1517]: time="2025-05-09T00:14:05.377134507Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 9 00:14:05.378580 containerd[1517]: time="2025-05-09T00:14:05.377479528Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 9 00:14:05.391045 containerd[1517]: time="2025-05-09T00:14:05.390939896Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 9 00:14:05.391045 containerd[1517]: time="2025-05-09T00:14:05.390997976Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 9 00:14:05.391045 containerd[1517]: time="2025-05-09T00:14:05.391011732Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 9 00:14:05.399352 containerd[1517]: time="2025-05-09T00:14:05.392684239Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 9 00:14:05.414733 systemd[1]: Started cri-containerd-47cce06275af470cdd00f825b1faa13466b53be8ee55e05bd3d93ec57b032723.scope - libcontainer container 47cce06275af470cdd00f825b1faa13466b53be8ee55e05bd3d93ec57b032723.
May 9 00:14:05.427532 systemd[1]: Started cri-containerd-f7d9bf05fb44d15c01e2f416fe3ce943d667b4ab6ee6d041d5ab197c7372f9f5.scope - libcontainer container f7d9bf05fb44d15c01e2f416fe3ce943d667b4ab6ee6d041d5ab197c7372f9f5.
May 9 00:14:05.490669 containerd[1517]: time="2025-05-09T00:14:05.490601709Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-5zhwk,Uid:ab613844-7b25-48b5-9dcc-e1dd0bb4b52e,Namespace:kube-system,Attempt:0,} returns sandbox id \"f7d9bf05fb44d15c01e2f416fe3ce943d667b4ab6ee6d041d5ab197c7372f9f5\""
May 9 00:14:05.493508 containerd[1517]: time="2025-05-09T00:14:05.493291407Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-v7qgk,Uid:9f83bf86-f9df-456d-a818-2660b15755cc,Namespace:kube-system,Attempt:0,} returns sandbox id \"47cce06275af470cdd00f825b1faa13466b53be8ee55e05bd3d93ec57b032723\""
May 9 00:14:05.495799 containerd[1517]: time="2025-05-09T00:14:05.495658695Z" level=info msg="CreateContainer within sandbox \"f7d9bf05fb44d15c01e2f416fe3ce943d667b4ab6ee6d041d5ab197c7372f9f5\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
May 9 00:14:05.498112 containerd[1517]: time="2025-05-09T00:14:05.497994013Z" level=info msg="CreateContainer within sandbox \"47cce06275af470cdd00f825b1faa13466b53be8ee55e05bd3d93ec57b032723\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
May 9 00:14:05.513013 containerd[1517]: time="2025-05-09T00:14:05.512994158Z" level=info msg="CreateContainer within sandbox \"47cce06275af470cdd00f825b1faa13466b53be8ee55e05bd3d93ec57b032723\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3f2994631f33bd7d47c0dc2ecac0c3db15bcc43ffb19492802c7a84a7db721d4\""
May 9 00:14:05.514221 containerd[1517]: time="2025-05-09T00:14:05.514046835Z" level=info msg="CreateContainer within sandbox \"f7d9bf05fb44d15c01e2f416fe3ce943d667b4ab6ee6d041d5ab197c7372f9f5\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"042f585567a85cea88dcc2e43c907ddaf225cc599b1afc2fcdd9b1cd479e1444\""
May 9 00:14:05.514884 containerd[1517]: time="2025-05-09T00:14:05.514348715Z" level=info msg="StartContainer for \"042f585567a85cea88dcc2e43c907ddaf225cc599b1afc2fcdd9b1cd479e1444\""
May 9 00:14:05.514884 containerd[1517]: time="2025-05-09T00:14:05.514454635Z" level=info msg="StartContainer for \"3f2994631f33bd7d47c0dc2ecac0c3db15bcc43ffb19492802c7a84a7db721d4\""
May 9 00:14:05.541477 systemd[1]: Started cri-containerd-042f585567a85cea88dcc2e43c907ddaf225cc599b1afc2fcdd9b1cd479e1444.scope - libcontainer container 042f585567a85cea88dcc2e43c907ddaf225cc599b1afc2fcdd9b1cd479e1444.
May 9 00:14:05.555434 systemd[1]: Started cri-containerd-3f2994631f33bd7d47c0dc2ecac0c3db15bcc43ffb19492802c7a84a7db721d4.scope - libcontainer container 3f2994631f33bd7d47c0dc2ecac0c3db15bcc43ffb19492802c7a84a7db721d4.
May 9 00:14:05.577930 containerd[1517]: time="2025-05-09T00:14:05.577902449Z" level=info msg="StartContainer for \"042f585567a85cea88dcc2e43c907ddaf225cc599b1afc2fcdd9b1cd479e1444\" returns successfully"
May 9 00:14:05.590270 containerd[1517]: time="2025-05-09T00:14:05.590145530Z" level=info msg="StartContainer for \"3f2994631f33bd7d47c0dc2ecac0c3db15bcc43ffb19492802c7a84a7db721d4\" returns successfully"
May 9 00:14:06.084509 kubelet[2858]: I0509 00:14:06.084270 2858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-5zhwk" podStartSLOduration=23.084251871 podStartE2EDuration="23.084251871s" podCreationTimestamp="2025-05-09 00:13:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-09 00:14:06.082811962 +0000 UTC m=+37.227037827" watchObservedRunningTime="2025-05-09 00:14:06.084251871 +0000 UTC m=+37.228477736"
May 9 00:14:06.084894 kubelet[2858]: I0509 00:14:06.084592 2858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-v7qgk" podStartSLOduration=23.084587685 podStartE2EDuration="23.084587685s" podCreationTimestamp="2025-05-09 00:13:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-09 00:14:06.072317603 +0000 UTC m=+37.216543448" watchObservedRunningTime="2025-05-09 00:14:06.084587685 +0000 UTC m=+37.228813540"
May 9 00:14:11.552186 kubelet[2858]: I0509 00:14:11.551940 2858 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
May 9 00:14:55.593464 systemd[1]: Started sshd@7-157.180.45.97:22-85.208.84.5:59804.service - OpenSSH per-connection server daemon (85.208.84.5:59804).
May 9 00:14:55.960734 sshd[4234]: Invalid user admin from 85.208.84.5 port 59804
May 9 00:14:56.000050 sshd[4234]: Connection closed by invalid user admin 85.208.84.5 port 59804 [preauth]
May 9 00:14:56.002831 systemd[1]: sshd@7-157.180.45.97:22-85.208.84.5:59804.service: Deactivated successfully.
May 9 00:15:51.555004 update_engine[1498]: I20250509 00:15:51.554930 1498 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
May 9 00:15:51.555004 update_engine[1498]: I20250509 00:15:51.554989 1498 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
May 9 00:15:51.556872 update_engine[1498]: I20250509 00:15:51.556835 1498 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
May 9 00:15:51.557862 update_engine[1498]: I20250509 00:15:51.557423 1498 omaha_request_params.cc:62] Current group set to stable
May 9 00:15:51.557862 update_engine[1498]: I20250509 00:15:51.557581 1498 update_attempter.cc:499] Already updated boot flags. Skipping.
May 9 00:15:51.557862 update_engine[1498]: I20250509 00:15:51.557596 1498 update_attempter.cc:643] Scheduling an action processor start.
May 9 00:15:51.557862 update_engine[1498]: I20250509 00:15:51.557616 1498 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
May 9 00:15:51.557862 update_engine[1498]: I20250509 00:15:51.557653 1498 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
May 9 00:15:51.557862 update_engine[1498]: I20250509 00:15:51.557705 1498 omaha_request_action.cc:271] Posting an Omaha request to disabled
May 9 00:15:51.557862 update_engine[1498]: I20250509 00:15:51.557713 1498 omaha_request_action.cc:272] Request:
May 9 00:15:51.557862 update_engine[1498]:
May 9 00:15:51.557862 update_engine[1498]:
May 9 00:15:51.557862 update_engine[1498]:
May 9 00:15:51.557862 update_engine[1498]:
May 9 00:15:51.557862 update_engine[1498]:
May 9 00:15:51.557862 update_engine[1498]:
May 9 00:15:51.557862 update_engine[1498]:
May 9 00:15:51.557862 update_engine[1498]:
May 9 00:15:51.557862 update_engine[1498]: I20250509 00:15:51.557718 1498 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
May 9 00:15:51.565599 update_engine[1498]: I20250509 00:15:51.565572 1498 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
May 9 00:15:51.566047 update_engine[1498]: I20250509 00:15:51.565979 1498 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
May 9 00:15:51.566624 locksmithd[1539]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
May 9 00:15:51.567564 update_engine[1498]: E20250509 00:15:51.567510 1498 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
May 9 00:15:51.567623 update_engine[1498]: I20250509 00:15:51.567602 1498 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
May 9 00:16:01.436282 update_engine[1498]: I20250509 00:16:01.436168 1498 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
May 9 00:16:01.437333 update_engine[1498]: I20250509 00:16:01.437261 1498 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
May 9 00:16:01.437714 update_engine[1498]: I20250509 00:16:01.437662 1498 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
May 9 00:16:01.438415 update_engine[1498]: E20250509 00:16:01.438246 1498 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
May 9 00:16:01.438503 update_engine[1498]: I20250509 00:16:01.438477 1498 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
May 9 00:16:11.437086 update_engine[1498]: I20250509 00:16:11.436966 1498 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
May 9 00:16:11.441868 update_engine[1498]: I20250509 00:16:11.437436 1498 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
May 9 00:16:11.441868 update_engine[1498]: I20250509 00:16:11.438047 1498 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
May 9 00:16:11.441868 update_engine[1498]: E20250509 00:16:11.438529 1498 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
May 9 00:16:11.441868 update_engine[1498]: I20250509 00:16:11.438596 1498 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
May 9 00:16:21.437420 update_engine[1498]: I20250509 00:16:21.437291 1498 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
May 9 00:16:21.437976 update_engine[1498]: I20250509 00:16:21.437674 1498 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
May 9 00:16:21.438108 update_engine[1498]: I20250509 00:16:21.438061 1498 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
May 9 00:16:21.438557 update_engine[1498]: E20250509 00:16:21.438498 1498 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
May 9 00:16:21.438649 update_engine[1498]: I20250509 00:16:21.438578 1498 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
May 9 00:16:21.438649 update_engine[1498]: I20250509 00:16:21.438601 1498 omaha_request_action.cc:617] Omaha request response:
May 9 00:16:21.438760 update_engine[1498]: E20250509 00:16:21.438728 1498 omaha_request_action.cc:636] Omaha request network transfer failed.
May 9 00:16:21.438825 update_engine[1498]: I20250509 00:16:21.438771 1498 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
May 9 00:16:21.438825 update_engine[1498]: I20250509 00:16:21.438779 1498 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
May 9 00:16:21.438825 update_engine[1498]: I20250509 00:16:21.438786 1498 update_attempter.cc:306] Processing Done.
May 9 00:16:21.438825 update_engine[1498]: E20250509 00:16:21.438807 1498 update_attempter.cc:619] Update failed.
May 9 00:16:21.438825 update_engine[1498]: I20250509 00:16:21.438814 1498 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
May 9 00:16:21.438825 update_engine[1498]: I20250509 00:16:21.438822 1498 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
May 9 00:16:21.439174 update_engine[1498]: I20250509 00:16:21.438828 1498 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
May 9 00:16:21.439174 update_engine[1498]: I20250509 00:16:21.438933 1498 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
May 9 00:16:21.439174 update_engine[1498]: I20250509 00:16:21.438978 1498 omaha_request_action.cc:271] Posting an Omaha request to disabled
May 9 00:16:21.439174 update_engine[1498]: I20250509 00:16:21.438992 1498 omaha_request_action.cc:272] Request:
May 9 00:16:21.439174 update_engine[1498]:
May 9 00:16:21.439174 update_engine[1498]:
May 9 00:16:21.439174 update_engine[1498]:
May 9 00:16:21.439174 update_engine[1498]:
May 9 00:16:21.439174 update_engine[1498]:
May 9 00:16:21.439174 update_engine[1498]:
May 9 00:16:21.439174 update_engine[1498]: I20250509 00:16:21.439002 1498 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
May 9 00:16:21.439714 update_engine[1498]: I20250509 00:16:21.439266 1498 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
May 9 00:16:21.439714 update_engine[1498]: I20250509 00:16:21.439671 1498 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
May 9 00:16:21.440227 update_engine[1498]: E20250509 00:16:21.440013 1498 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
May 9 00:16:21.440227 update_engine[1498]: I20250509 00:16:21.440076 1498 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
May 9 00:16:21.440227 update_engine[1498]: I20250509 00:16:21.440088 1498 omaha_request_action.cc:617] Omaha request response:
May 9 00:16:21.440227 update_engine[1498]: I20250509 00:16:21.440096 1498 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
May 9 00:16:21.440227 update_engine[1498]: I20250509 00:16:21.440104 1498 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
May 9 00:16:21.440227 update_engine[1498]: I20250509 00:16:21.440109 1498 update_attempter.cc:306] Processing Done.
May 9 00:16:21.440227 update_engine[1498]: I20250509 00:16:21.440117 1498 update_attempter.cc:310] Error event sent.
May 9 00:16:21.440227 update_engine[1498]: I20250509 00:16:21.440130 1498 update_check_scheduler.cc:74] Next update check in 48m16s
May 9 00:16:21.441261 locksmithd[1539]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
May 9 00:16:21.441261 locksmithd[1539]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
May 9 00:18:07.219162 systemd[1]: Started sshd@8-157.180.45.97:22-139.178.68.195:37560.service - OpenSSH per-connection server daemon (139.178.68.195:37560).
May 9 00:18:08.220152 sshd[4269]: Accepted publickey for core from 139.178.68.195 port 37560 ssh2: RSA SHA256:kWmuzyOdL82NqCTDeKfCPqtPYuFTqtQu4IYGGTbCa4E
May 9 00:18:08.222227 sshd-session[4269]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 00:18:08.226992 systemd-logind[1495]: New session 8 of user core.
May 9 00:18:08.232694 systemd[1]: Started session-8.scope - Session 8 of User core.
May 9 00:18:09.377783 sshd[4271]: Connection closed by 139.178.68.195 port 37560
May 9 00:18:09.378916 sshd-session[4269]: pam_unix(sshd:session): session closed for user core
May 9 00:18:09.384293 systemd-logind[1495]: Session 8 logged out. Waiting for processes to exit.
May 9 00:18:09.385419 systemd[1]: sshd@8-157.180.45.97:22-139.178.68.195:37560.service: Deactivated successfully.
May 9 00:18:09.389503 systemd[1]: session-8.scope: Deactivated successfully.
May 9 00:18:09.392046 systemd-logind[1495]: Removed session 8.
May 9 00:18:14.544663 systemd[1]: Started sshd@9-157.180.45.97:22-139.178.68.195:37568.service - OpenSSH per-connection server daemon (139.178.68.195:37568).
May 9 00:18:15.512976 sshd[4283]: Accepted publickey for core from 139.178.68.195 port 37568 ssh2: RSA SHA256:kWmuzyOdL82NqCTDeKfCPqtPYuFTqtQu4IYGGTbCa4E
May 9 00:18:15.515129 sshd-session[4283]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 00:18:15.522950 systemd-logind[1495]: New session 9 of user core.
May 9 00:18:15.529974 systemd[1]: Started session-9.scope - Session 9 of User core.
May 9 00:18:16.287969 sshd[4285]: Connection closed by 139.178.68.195 port 37568
May 9 00:18:16.288630 sshd-session[4283]: pam_unix(sshd:session): session closed for user core
May 9 00:18:16.292620 systemd[1]: sshd@9-157.180.45.97:22-139.178.68.195:37568.service: Deactivated successfully.
May 9 00:18:16.295244 systemd[1]: session-9.scope: Deactivated successfully.
May 9 00:18:16.296985 systemd-logind[1495]: Session 9 logged out. Waiting for processes to exit.
May 9 00:18:16.298675 systemd-logind[1495]: Removed session 9.
May 9 00:18:21.460214 systemd[1]: Started sshd@10-157.180.45.97:22-139.178.68.195:49956.service - OpenSSH per-connection server daemon (139.178.68.195:49956).
May 9 00:18:22.429382 sshd[4299]: Accepted publickey for core from 139.178.68.195 port 49956 ssh2: RSA SHA256:kWmuzyOdL82NqCTDeKfCPqtPYuFTqtQu4IYGGTbCa4E
May 9 00:18:22.431410 sshd-session[4299]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 00:18:22.438946 systemd-logind[1495]: New session 10 of user core.
May 9 00:18:22.444584 systemd[1]: Started session-10.scope - Session 10 of User core.
May 9 00:18:23.188335 sshd[4301]: Connection closed by 139.178.68.195 port 49956
May 9 00:18:23.189083 sshd-session[4299]: pam_unix(sshd:session): session closed for user core
May 9 00:18:23.193923 systemd-logind[1495]: Session 10 logged out. Waiting for processes to exit.
May 9 00:18:23.194140 systemd[1]: sshd@10-157.180.45.97:22-139.178.68.195:49956.service: Deactivated successfully.
May 9 00:18:23.196226 systemd[1]: session-10.scope: Deactivated successfully.
May 9 00:18:23.198127 systemd-logind[1495]: Removed session 10.
May 9 00:18:23.361822 systemd[1]: Started sshd@11-157.180.45.97:22-139.178.68.195:49966.service - OpenSSH per-connection server daemon (139.178.68.195:49966).
May 9 00:18:24.347682 sshd[4314]: Accepted publickey for core from 139.178.68.195 port 49966 ssh2: RSA SHA256:kWmuzyOdL82NqCTDeKfCPqtPYuFTqtQu4IYGGTbCa4E
May 9 00:18:24.349327 sshd-session[4314]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 00:18:24.354087 systemd-logind[1495]: New session 11 of user core.
May 9 00:18:24.359451 systemd[1]: Started session-11.scope - Session 11 of User core.
May 9 00:18:25.135186 sshd[4316]: Connection closed by 139.178.68.195 port 49966
May 9 00:18:25.136285 sshd-session[4314]: pam_unix(sshd:session): session closed for user core
May 9 00:18:25.139729 systemd-logind[1495]: Session 11 logged out. Waiting for processes to exit.
May 9 00:18:25.139881 systemd[1]: sshd@11-157.180.45.97:22-139.178.68.195:49966.service: Deactivated successfully.
May 9 00:18:25.141808 systemd[1]: session-11.scope: Deactivated successfully.
May 9 00:18:25.143051 systemd-logind[1495]: Removed session 11.
May 9 00:18:25.300400 systemd[1]: Started sshd@12-157.180.45.97:22-139.178.68.195:59484.service - OpenSSH per-connection server daemon (139.178.68.195:59484).
May 9 00:18:26.271662 sshd[4325]: Accepted publickey for core from 139.178.68.195 port 59484 ssh2: RSA SHA256:kWmuzyOdL82NqCTDeKfCPqtPYuFTqtQu4IYGGTbCa4E
May 9 00:18:26.274756 sshd-session[4325]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 00:18:26.282005 systemd-logind[1495]: New session 12 of user core.
May 9 00:18:26.291649 systemd[1]: Started session-12.scope - Session 12 of User core.
May 9 00:18:27.039905 sshd[4327]: Connection closed by 139.178.68.195 port 59484
May 9 00:18:27.040766 sshd-session[4325]: pam_unix(sshd:session): session closed for user core
May 9 00:18:27.044232 systemd[1]: sshd@12-157.180.45.97:22-139.178.68.195:59484.service: Deactivated successfully.
May 9 00:18:27.046621 systemd[1]: session-12.scope: Deactivated successfully.
May 9 00:18:27.049013 systemd-logind[1495]: Session 12 logged out. Waiting for processes to exit.
May 9 00:18:27.051060 systemd-logind[1495]: Removed session 12.
May 9 00:18:32.206657 systemd[1]: Started sshd@13-157.180.45.97:22-139.178.68.195:59500.service - OpenSSH per-connection server daemon (139.178.68.195:59500).
May 9 00:18:33.178888 sshd[4340]: Accepted publickey for core from 139.178.68.195 port 59500 ssh2: RSA SHA256:kWmuzyOdL82NqCTDeKfCPqtPYuFTqtQu4IYGGTbCa4E
May 9 00:18:33.181166 sshd-session[4340]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 00:18:33.189146 systemd-logind[1495]: New session 13 of user core.
May 9 00:18:33.193624 systemd[1]: Started session-13.scope - Session 13 of User core.
May 9 00:18:33.937891 sshd[4342]: Connection closed by 139.178.68.195 port 59500
May 9 00:18:33.939178 sshd-session[4340]: pam_unix(sshd:session): session closed for user core
May 9 00:18:33.943480 systemd[1]: sshd@13-157.180.45.97:22-139.178.68.195:59500.service: Deactivated successfully.
May 9 00:18:33.946144 systemd[1]: session-13.scope: Deactivated successfully.
May 9 00:18:33.948187 systemd-logind[1495]: Session 13 logged out. Waiting for processes to exit.
May 9 00:18:33.950359 systemd-logind[1495]: Removed session 13.
May 9 00:18:34.104964 systemd[1]: Started sshd@14-157.180.45.97:22-139.178.68.195:59508.service - OpenSSH per-connection server daemon (139.178.68.195:59508).
May 9 00:18:35.087111 sshd[4353]: Accepted publickey for core from 139.178.68.195 port 59508 ssh2: RSA SHA256:kWmuzyOdL82NqCTDeKfCPqtPYuFTqtQu4IYGGTbCa4E
May 9 00:18:35.088680 sshd-session[4353]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 00:18:35.093475 systemd-logind[1495]: New session 14 of user core.
May 9 00:18:35.097459 systemd[1]: Started session-14.scope - Session 14 of User core.
May 9 00:18:36.121067 sshd[4355]: Connection closed by 139.178.68.195 port 59508
May 9 00:18:36.122760 sshd-session[4353]: pam_unix(sshd:session): session closed for user core
May 9 00:18:36.130147 systemd[1]: sshd@14-157.180.45.97:22-139.178.68.195:59508.service: Deactivated successfully.
May 9 00:18:36.133071 systemd[1]: session-14.scope: Deactivated successfully.
May 9 00:18:36.134695 systemd-logind[1495]: Session 14 logged out. Waiting for processes to exit.
May 9 00:18:36.136401 systemd-logind[1495]: Removed session 14.
May 9 00:18:36.293717 systemd[1]: Started sshd@15-157.180.45.97:22-139.178.68.195:41810.service - OpenSSH per-connection server daemon (139.178.68.195:41810).
May 9 00:18:37.291697 sshd[4365]: Accepted publickey for core from 139.178.68.195 port 41810 ssh2: RSA SHA256:kWmuzyOdL82NqCTDeKfCPqtPYuFTqtQu4IYGGTbCa4E
May 9 00:18:37.294234 sshd-session[4365]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 00:18:37.301506 systemd-logind[1495]: New session 15 of user core.
May 9 00:18:37.308606 systemd[1]: Started session-15.scope - Session 15 of User core.
May 9 00:18:39.568010 sshd[4367]: Connection closed by 139.178.68.195 port 41810
May 9 00:18:39.568596 sshd-session[4365]: pam_unix(sshd:session): session closed for user core
May 9 00:18:39.575120 systemd-logind[1495]: Session 15 logged out. Waiting for processes to exit.
May 9 00:18:39.575644 systemd[1]: sshd@15-157.180.45.97:22-139.178.68.195:41810.service: Deactivated successfully.
May 9 00:18:39.577400 systemd[1]: session-15.scope: Deactivated successfully.
May 9 00:18:39.578364 systemd-logind[1495]: Removed session 15.
May 9 00:18:39.732968 systemd[1]: Started sshd@16-157.180.45.97:22-139.178.68.195:41824.service - OpenSSH per-connection server daemon (139.178.68.195:41824).
May 9 00:18:40.699392 sshd[4383]: Accepted publickey for core from 139.178.68.195 port 41824 ssh2: RSA SHA256:kWmuzyOdL82NqCTDeKfCPqtPYuFTqtQu4IYGGTbCa4E
May 9 00:18:40.700985 sshd-session[4383]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 00:18:40.705267 systemd-logind[1495]: New session 16 of user core.
May 9 00:18:40.711471 systemd[1]: Started session-16.scope - Session 16 of User core.
May 9 00:18:41.533867 sshd[4385]: Connection closed by 139.178.68.195 port 41824
May 9 00:18:41.534565 sshd-session[4383]: pam_unix(sshd:session): session closed for user core
May 9 00:18:41.537892 systemd-logind[1495]: Session 16 logged out. Waiting for processes to exit.
May 9 00:18:41.538039 systemd[1]: sshd@16-157.180.45.97:22-139.178.68.195:41824.service: Deactivated successfully.
May 9 00:18:41.539611 systemd[1]: session-16.scope: Deactivated successfully.
May 9 00:18:41.540476 systemd-logind[1495]: Removed session 16.
May 9 00:18:41.699651 systemd[1]: Started sshd@17-157.180.45.97:22-139.178.68.195:41838.service - OpenSSH per-connection server daemon (139.178.68.195:41838).
May 9 00:18:42.668463 sshd[4394]: Accepted publickey for core from 139.178.68.195 port 41838 ssh2: RSA SHA256:kWmuzyOdL82NqCTDeKfCPqtPYuFTqtQu4IYGGTbCa4E
May 9 00:18:42.669990 sshd-session[4394]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 00:18:42.674812 systemd-logind[1495]: New session 17 of user core.
May 9 00:18:42.680449 systemd[1]: Started session-17.scope - Session 17 of User core.
May 9 00:18:43.389240 sshd[4396]: Connection closed by 139.178.68.195 port 41838
May 9 00:18:43.389804 sshd-session[4394]: pam_unix(sshd:session): session closed for user core
May 9 00:18:43.393081 systemd[1]: sshd@17-157.180.45.97:22-139.178.68.195:41838.service: Deactivated successfully.
May 9 00:18:43.395743 systemd[1]: session-17.scope: Deactivated successfully.
May 9 00:18:43.396440 systemd-logind[1495]: Session 17 logged out. Waiting for processes to exit.
May 9 00:18:43.397455 systemd-logind[1495]: Removed session 17.
May 9 00:18:48.556375 systemd[1]: Started sshd@18-157.180.45.97:22-139.178.68.195:46288.service - OpenSSH per-connection server daemon (139.178.68.195:46288).
May 9 00:18:49.531657 sshd[4411]: Accepted publickey for core from 139.178.68.195 port 46288 ssh2: RSA SHA256:kWmuzyOdL82NqCTDeKfCPqtPYuFTqtQu4IYGGTbCa4E
May 9 00:18:49.532978 sshd-session[4411]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 00:18:49.537768 systemd-logind[1495]: New session 18 of user core.
May 9 00:18:49.542532 systemd[1]: Started session-18.scope - Session 18 of User core.
May 9 00:18:50.264812 sshd[4413]: Connection closed by 139.178.68.195 port 46288
May 9 00:18:50.265350 sshd-session[4411]: pam_unix(sshd:session): session closed for user core
May 9 00:18:50.268177 systemd[1]: sshd@18-157.180.45.97:22-139.178.68.195:46288.service: Deactivated successfully.
May 9 00:18:50.269696 systemd[1]: session-18.scope: Deactivated successfully.
May 9 00:18:50.270407 systemd-logind[1495]: Session 18 logged out. Waiting for processes to exit.
May 9 00:18:50.271632 systemd-logind[1495]: Removed session 18.
May 9 00:18:55.437722 systemd[1]: Started sshd@19-157.180.45.97:22-139.178.68.195:48924.service - OpenSSH per-connection server daemon (139.178.68.195:48924).
May 9 00:18:56.428770 sshd[4424]: Accepted publickey for core from 139.178.68.195 port 48924 ssh2: RSA SHA256:kWmuzyOdL82NqCTDeKfCPqtPYuFTqtQu4IYGGTbCa4E
May 9 00:18:56.430823 sshd-session[4424]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 00:18:56.438473 systemd-logind[1495]: New session 19 of user core.
May 9 00:18:56.444519 systemd[1]: Started session-19.scope - Session 19 of User core.
May 9 00:18:57.152521 sshd[4426]: Connection closed by 139.178.68.195 port 48924
May 9 00:18:57.153090 sshd-session[4424]: pam_unix(sshd:session): session closed for user core
May 9 00:18:57.155701 systemd[1]: sshd@19-157.180.45.97:22-139.178.68.195:48924.service: Deactivated successfully.
May 9 00:18:57.157216 systemd[1]: session-19.scope: Deactivated successfully.
May 9 00:18:57.157895 systemd-logind[1495]: Session 19 logged out. Waiting for processes to exit.
May 9 00:18:57.158890 systemd-logind[1495]: Removed session 19.
May 9 00:18:57.327735 systemd[1]: Started sshd@20-157.180.45.97:22-139.178.68.195:48940.service - OpenSSH per-connection server daemon (139.178.68.195:48940).
May 9 00:18:58.303508 sshd[4437]: Accepted publickey for core from 139.178.68.195 port 48940 ssh2: RSA SHA256:kWmuzyOdL82NqCTDeKfCPqtPYuFTqtQu4IYGGTbCa4E
May 9 00:18:58.305494 sshd-session[4437]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 00:18:58.312873 systemd-logind[1495]: New session 20 of user core.
May 9 00:18:58.316509 systemd[1]: Started session-20.scope - Session 20 of User core.
May 9 00:19:00.180488 containerd[1517]: time="2025-05-09T00:19:00.180427533Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 9 00:19:00.181681 containerd[1517]: time="2025-05-09T00:19:00.181641304Z" level=info msg="StopContainer for \"9b2bba679273c5ab0d102989bd0efb6496c52ffd0ce6e26374720a8dde7a87c6\" with timeout 30 (s)"
May 9 00:19:00.183793 containerd[1517]: time="2025-05-09T00:19:00.183694097Z" level=info msg="Stop container \"9b2bba679273c5ab0d102989bd0efb6496c52ffd0ce6e26374720a8dde7a87c6\" with signal terminated"
May 9 00:19:00.187819 containerd[1517]: time="2025-05-09T00:19:00.187750842Z" level=info msg="StopContainer for \"00f77ef3b9fcb235c34a6ea569b0427834aa191f83f8c847472e237534eda186\" with timeout 2 (s)"
May 9 00:19:00.188036 containerd[1517]: time="2025-05-09T00:19:00.188023065Z" level=info msg="Stop container \"00f77ef3b9fcb235c34a6ea569b0427834aa191f83f8c847472e237534eda186\" with signal terminated"
May 9 00:19:00.195622 systemd[1]: cri-containerd-9b2bba679273c5ab0d102989bd0efb6496c52ffd0ce6e26374720a8dde7a87c6.scope: Deactivated successfully.
May 9 00:19:00.199831 systemd-networkd[1430]: lxc_health: Link DOWN
May 9 00:19:00.199836 systemd-networkd[1430]: lxc_health: Lost carrier
May 9 00:19:00.218568 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9b2bba679273c5ab0d102989bd0efb6496c52ffd0ce6e26374720a8dde7a87c6-rootfs.mount: Deactivated successfully.
May 9 00:19:00.222738 systemd[1]: cri-containerd-00f77ef3b9fcb235c34a6ea569b0427834aa191f83f8c847472e237534eda186.scope: Deactivated successfully.
May 9 00:19:00.222904 systemd[1]: cri-containerd-00f77ef3b9fcb235c34a6ea569b0427834aa191f83f8c847472e237534eda186.scope: Consumed 6.564s CPU time.
May 9 00:19:00.234588 containerd[1517]: time="2025-05-09T00:19:00.234522939Z" level=info msg="shim disconnected" id=9b2bba679273c5ab0d102989bd0efb6496c52ffd0ce6e26374720a8dde7a87c6 namespace=k8s.io
May 9 00:19:00.234588 containerd[1517]: time="2025-05-09T00:19:00.234581810Z" level=warning msg="cleaning up after shim disconnected" id=9b2bba679273c5ab0d102989bd0efb6496c52ffd0ce6e26374720a8dde7a87c6 namespace=k8s.io
May 9 00:19:00.234588 containerd[1517]: time="2025-05-09T00:19:00.234590867Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 9 00:19:00.240560 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-00f77ef3b9fcb235c34a6ea569b0427834aa191f83f8c847472e237534eda186-rootfs.mount: Deactivated successfully.
May 9 00:19:00.249848 containerd[1517]: time="2025-05-09T00:19:00.249802591Z" level=info msg="StopContainer for \"9b2bba679273c5ab0d102989bd0efb6496c52ffd0ce6e26374720a8dde7a87c6\" returns successfully"
May 9 00:19:00.250508 containerd[1517]: time="2025-05-09T00:19:00.250467956Z" level=info msg="StopPodSandbox for \"b825fe20b8ea591c55f0874c74dec746451a4c194dfc7159cc7653a9191f8248\""
May 9 00:19:00.251119 containerd[1517]: time="2025-05-09T00:19:00.250997014Z" level=info msg="shim disconnected" id=00f77ef3b9fcb235c34a6ea569b0427834aa191f83f8c847472e237534eda186 namespace=k8s.io
May 9 00:19:00.251119 containerd[1517]: time="2025-05-09T00:19:00.251032291Z" level=warning msg="cleaning up after shim disconnected" id=00f77ef3b9fcb235c34a6ea569b0427834aa191f83f8c847472e237534eda186 namespace=k8s.io
May 9 00:19:00.251119 containerd[1517]: time="2025-05-09T00:19:00.251040597Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 9 00:19:00.251578 containerd[1517]: time="2025-05-09T00:19:00.251531774Z" level=info msg="Container to stop \"9b2bba679273c5ab0d102989bd0efb6496c52ffd0ce6e26374720a8dde7a87c6\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 9 00:19:00.254507 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b825fe20b8ea591c55f0874c74dec746451a4c194dfc7159cc7653a9191f8248-shm.mount: Deactivated successfully.
May 9 00:19:00.259972 systemd[1]: cri-containerd-b825fe20b8ea591c55f0874c74dec746451a4c194dfc7159cc7653a9191f8248.scope: Deactivated successfully.
May 9 00:19:00.269202 containerd[1517]: time="2025-05-09T00:19:00.269168073Z" level=info msg="StopContainer for \"00f77ef3b9fcb235c34a6ea569b0427834aa191f83f8c847472e237534eda186\" returns successfully"
May 9 00:19:00.269561 containerd[1517]: time="2025-05-09T00:19:00.269536908Z" level=info msg="StopPodSandbox for \"414fe5482e172e944effad1803daddb61c718e93993ae30c6068055a2cb36c4e\""
May 9 00:19:00.269598 containerd[1517]: time="2025-05-09T00:19:00.269562738Z" level=info msg="Container to stop \"25a67c8a378a7bc8447efe87bf5bd40d6c0de29b84bbfed4011cea25175a1f85\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 9 00:19:00.269598 containerd[1517]: time="2025-05-09T00:19:00.269589458Z" level=info msg="Container to stop \"58ea4445266e0b54b7a75153ef5165bdb11e40c693c8f67d0eb0da1af14fcbb2\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 9 00:19:00.269647 containerd[1517]: time="2025-05-09T00:19:00.269596972Z" level=info msg="Container to stop \"59d87a85360d916aa63249a398e56a8084d35763473bdde2a687b0a583d46869\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 9 00:19:00.269647 containerd[1517]: time="2025-05-09T00:19:00.269615567Z" level=info msg="Container to stop \"8a56c2db8807cf02225498f98e4a8cb21de96a0284e506c3365d34046d442c93\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 9 00:19:00.269647 containerd[1517]: time="2025-05-09T00:19:00.269622460Z" level=info msg="Container to stop \"00f77ef3b9fcb235c34a6ea569b0427834aa191f83f8c847472e237534eda186\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 9 00:19:00.271007 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-414fe5482e172e944effad1803daddb61c718e93993ae30c6068055a2cb36c4e-shm.mount: Deactivated successfully.
May 9 00:19:00.276832 systemd[1]: cri-containerd-414fe5482e172e944effad1803daddb61c718e93993ae30c6068055a2cb36c4e.scope: Deactivated successfully.
May 9 00:19:00.289064 containerd[1517]: time="2025-05-09T00:19:00.288782435Z" level=info msg="shim disconnected" id=b825fe20b8ea591c55f0874c74dec746451a4c194dfc7159cc7653a9191f8248 namespace=k8s.io
May 9 00:19:00.289064 containerd[1517]: time="2025-05-09T00:19:00.288827519Z" level=warning msg="cleaning up after shim disconnected" id=b825fe20b8ea591c55f0874c74dec746451a4c194dfc7159cc7653a9191f8248 namespace=k8s.io
May 9 00:19:00.289064 containerd[1517]: time="2025-05-09T00:19:00.288858838Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 9 00:19:00.297104 containerd[1517]: time="2025-05-09T00:19:00.297056087Z" level=info msg="shim disconnected" id=414fe5482e172e944effad1803daddb61c718e93993ae30c6068055a2cb36c4e namespace=k8s.io
May 9 00:19:00.297104 containerd[1517]: time="2025-05-09T00:19:00.297100650Z" level=warning msg="cleaning up after shim disconnected" id=414fe5482e172e944effad1803daddb61c718e93993ae30c6068055a2cb36c4e namespace=k8s.io
May 9 00:19:00.297104 containerd[1517]: time="2025-05-09T00:19:00.297107813Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 9 00:19:00.304125 containerd[1517]: time="2025-05-09T00:19:00.304070622Z" level=warning msg="cleanup warnings time=\"2025-05-09T00:19:00Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
May 9 00:19:00.305011 containerd[1517]: time="2025-05-09T00:19:00.304985699Z" level=info msg="TearDown network for sandbox \"b825fe20b8ea591c55f0874c74dec746451a4c194dfc7159cc7653a9191f8248\" successfully"
May 9 00:19:00.305011 containerd[1517]: time="2025-05-09T00:19:00.305005046Z" level=info msg="StopPodSandbox for \"b825fe20b8ea591c55f0874c74dec746451a4c194dfc7159cc7653a9191f8248\" returns successfully"
May 9 00:19:00.309528 containerd[1517]: time="2025-05-09T00:19:00.309507462Z" level=info msg="TearDown network for sandbox \"414fe5482e172e944effad1803daddb61c718e93993ae30c6068055a2cb36c4e\" successfully"
May 9 00:19:00.309528 containerd[1517]: time="2025-05-09T00:19:00.309524935Z" level=info msg="StopPodSandbox for \"414fe5482e172e944effad1803daddb61c718e93993ae30c6068055a2cb36c4e\" returns successfully"
May 9 00:19:00.410439 kubelet[2858]: I0509 00:19:00.410401 2858 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8dcedc0e-11a9-42de-b292-9e0db07cf3f3-xtables-lock\") pod \"8dcedc0e-11a9-42de-b292-9e0db07cf3f3\" (UID: \"8dcedc0e-11a9-42de-b292-9e0db07cf3f3\") "
May 9 00:19:00.410947 kubelet[2858]: I0509 00:19:00.410928 2858 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8dcedc0e-11a9-42de-b292-9e0db07cf3f3-hostproc\") pod \"8dcedc0e-11a9-42de-b292-9e0db07cf3f3\" (UID: \"8dcedc0e-11a9-42de-b292-9e0db07cf3f3\") "
May 9 00:19:00.411663 kubelet[2858]: I0509 00:19:00.411013 2858 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/789409de-01e7-47e9-940b-9208b464f021-cilium-config-path\") pod \"789409de-01e7-47e9-940b-9208b464f021\" (UID: \"789409de-01e7-47e9-940b-9208b464f021\") "
May 9 00:19:00.411663 kubelet[2858]: I0509 00:19:00.411040 2858 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8dcedc0e-11a9-42de-b292-9e0db07cf3f3-cni-path\") pod \"8dcedc0e-11a9-42de-b292-9e0db07cf3f3\" (UID: \"8dcedc0e-11a9-42de-b292-9e0db07cf3f3\") "
May 9 00:19:00.411663 kubelet[2858]: I0509 00:19:00.411058 2858 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fg4rk\" (UniqueName: \"kubernetes.io/projected/8dcedc0e-11a9-42de-b292-9e0db07cf3f3-kube-api-access-fg4rk\") pod \"8dcedc0e-11a9-42de-b292-9e0db07cf3f3\" (UID: \"8dcedc0e-11a9-42de-b292-9e0db07cf3f3\") "
May 9 00:19:00.411663 kubelet[2858]: I0509 00:19:00.411075 2858 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8dcedc0e-11a9-42de-b292-9e0db07cf3f3-cilium-run\") pod \"8dcedc0e-11a9-42de-b292-9e0db07cf3f3\" (UID: \"8dcedc0e-11a9-42de-b292-9e0db07cf3f3\") "
May 9 00:19:00.411663 kubelet[2858]: I0509 00:19:00.411089 2858 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8dcedc0e-11a9-42de-b292-9e0db07cf3f3-cilium-cgroup\") pod \"8dcedc0e-11a9-42de-b292-9e0db07cf3f3\" (UID: \"8dcedc0e-11a9-42de-b292-9e0db07cf3f3\") "
May 9 00:19:00.411663 kubelet[2858]: I0509 00:19:00.411103 2858 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4sxff\" (UniqueName: \"kubernetes.io/projected/789409de-01e7-47e9-940b-9208b464f021-kube-api-access-4sxff\") pod \"789409de-01e7-47e9-940b-9208b464f021\" (UID: \"789409de-01e7-47e9-940b-9208b464f021\") "
May 9 00:19:00.411827 kubelet[2858]: I0509 00:19:00.411119 2858 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8dcedc0e-11a9-42de-b292-9e0db07cf3f3-cilium-config-path\") pod \"8dcedc0e-11a9-42de-b292-9e0db07cf3f3\" (UID: \"8dcedc0e-11a9-42de-b292-9e0db07cf3f3\") "
May 9 00:19:00.411827 kubelet[2858]: I0509 00:19:00.411150 2858 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8dcedc0e-11a9-42de-b292-9e0db07cf3f3-clustermesh-secrets\") pod \"8dcedc0e-11a9-42de-b292-9e0db07cf3f3\" (UID: \"8dcedc0e-11a9-42de-b292-9e0db07cf3f3\") "
May 9 00:19:00.411827 kubelet[2858]: I0509 00:19:00.411165 2858 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8dcedc0e-11a9-42de-b292-9e0db07cf3f3-etc-cni-netd\") pod \"8dcedc0e-11a9-42de-b292-9e0db07cf3f3\" (UID: \"8dcedc0e-11a9-42de-b292-9e0db07cf3f3\") "
May 9 00:19:00.411827 kubelet[2858]: I0509 00:19:00.411179 2858 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8dcedc0e-11a9-42de-b292-9e0db07cf3f3-host-proc-sys-kernel\") pod \"8dcedc0e-11a9-42de-b292-9e0db07cf3f3\" (UID: \"8dcedc0e-11a9-42de-b292-9e0db07cf3f3\") "
May 9 00:19:00.411827 kubelet[2858]: I0509 00:19:00.411194 2858 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8dcedc0e-11a9-42de-b292-9e0db07cf3f3-lib-modules\") pod \"8dcedc0e-11a9-42de-b292-9e0db07cf3f3\" (UID: \"8dcedc0e-11a9-42de-b292-9e0db07cf3f3\") "
May 9 00:19:00.411827 kubelet[2858]: I0509 00:19:00.411206 2858 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8dcedc0e-11a9-42de-b292-9e0db07cf3f3-host-proc-sys-net\") pod \"8dcedc0e-11a9-42de-b292-9e0db07cf3f3\" (UID: \"8dcedc0e-11a9-42de-b292-9e0db07cf3f3\") "
May 9 00:19:00.411969 kubelet[2858]: I0509 00:19:00.411220 2858 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8dcedc0e-11a9-42de-b292-9e0db07cf3f3-bpf-maps\") pod \"8dcedc0e-11a9-42de-b292-9e0db07cf3f3\" (UID: \"8dcedc0e-11a9-42de-b292-9e0db07cf3f3\") "
May 9 00:19:00.411969 kubelet[2858]: I0509 00:19:00.411235 2858 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8dcedc0e-11a9-42de-b292-9e0db07cf3f3-hubble-tls\") pod \"8dcedc0e-11a9-42de-b292-9e0db07cf3f3\" (UID: \"8dcedc0e-11a9-42de-b292-9e0db07cf3f3\") "
May 9 00:19:00.411969 kubelet[2858]: I0509 00:19:00.410512 2858 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8dcedc0e-11a9-42de-b292-9e0db07cf3f3-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "8dcedc0e-11a9-42de-b292-9e0db07cf3f3" (UID: "8dcedc0e-11a9-42de-b292-9e0db07cf3f3"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 9 00:19:00.420068 kubelet[2858]: I0509 00:19:00.419830 2858 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8dcedc0e-11a9-42de-b292-9e0db07cf3f3-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "8dcedc0e-11a9-42de-b292-9e0db07cf3f3" (UID: "8dcedc0e-11a9-42de-b292-9e0db07cf3f3"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
May 9 00:19:00.420276 kubelet[2858]: I0509 00:19:00.420236 2858 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8dcedc0e-11a9-42de-b292-9e0db07cf3f3-hostproc" (OuterVolumeSpecName: "hostproc") pod "8dcedc0e-11a9-42de-b292-9e0db07cf3f3" (UID: "8dcedc0e-11a9-42de-b292-9e0db07cf3f3"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 9 00:19:00.422186 kubelet[2858]: I0509 00:19:00.422166 2858 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8dcedc0e-11a9-42de-b292-9e0db07cf3f3-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "8dcedc0e-11a9-42de-b292-9e0db07cf3f3" (UID: "8dcedc0e-11a9-42de-b292-9e0db07cf3f3"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
May 9 00:19:00.422274 kubelet[2858]: I0509 00:19:00.422260 2858 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8dcedc0e-11a9-42de-b292-9e0db07cf3f3-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "8dcedc0e-11a9-42de-b292-9e0db07cf3f3" (UID: "8dcedc0e-11a9-42de-b292-9e0db07cf3f3"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 9 00:19:00.422493 kubelet[2858]: I0509 00:19:00.422374 2858 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8dcedc0e-11a9-42de-b292-9e0db07cf3f3-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "8dcedc0e-11a9-42de-b292-9e0db07cf3f3" (UID: "8dcedc0e-11a9-42de-b292-9e0db07cf3f3"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 9 00:19:00.422493 kubelet[2858]: I0509 00:19:00.422397 2858 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8dcedc0e-11a9-42de-b292-9e0db07cf3f3-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "8dcedc0e-11a9-42de-b292-9e0db07cf3f3" (UID: "8dcedc0e-11a9-42de-b292-9e0db07cf3f3"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 9 00:19:00.422493 kubelet[2858]: I0509 00:19:00.422408 2858 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8dcedc0e-11a9-42de-b292-9e0db07cf3f3-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "8dcedc0e-11a9-42de-b292-9e0db07cf3f3" (UID: "8dcedc0e-11a9-42de-b292-9e0db07cf3f3"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 9 00:19:00.422493 kubelet[2858]: I0509 00:19:00.422420 2858 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8dcedc0e-11a9-42de-b292-9e0db07cf3f3-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "8dcedc0e-11a9-42de-b292-9e0db07cf3f3" (UID: "8dcedc0e-11a9-42de-b292-9e0db07cf3f3"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 9 00:19:00.422625 kubelet[2858]: I0509 00:19:00.422551 2858 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8dcedc0e-11a9-42de-b292-9e0db07cf3f3-cni-path" (OuterVolumeSpecName: "cni-path") pod "8dcedc0e-11a9-42de-b292-9e0db07cf3f3" (UID: "8dcedc0e-11a9-42de-b292-9e0db07cf3f3"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 9 00:19:00.423444 kubelet[2858]: I0509 00:19:00.423389 2858 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8dcedc0e-11a9-42de-b292-9e0db07cf3f3-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "8dcedc0e-11a9-42de-b292-9e0db07cf3f3" (UID: "8dcedc0e-11a9-42de-b292-9e0db07cf3f3"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 9 00:19:00.423444 kubelet[2858]: I0509 00:19:00.423421 2858 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8dcedc0e-11a9-42de-b292-9e0db07cf3f3-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "8dcedc0e-11a9-42de-b292-9e0db07cf3f3" (UID: "8dcedc0e-11a9-42de-b292-9e0db07cf3f3"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 9 00:19:00.424700 kubelet[2858]: I0509 00:19:00.424646 2858 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/789409de-01e7-47e9-940b-9208b464f021-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "789409de-01e7-47e9-940b-9208b464f021" (UID: "789409de-01e7-47e9-940b-9208b464f021"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
May 9 00:19:00.425130 kubelet[2858]: I0509 00:19:00.425093 2858 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8dcedc0e-11a9-42de-b292-9e0db07cf3f3-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "8dcedc0e-11a9-42de-b292-9e0db07cf3f3" (UID: "8dcedc0e-11a9-42de-b292-9e0db07cf3f3"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
May 9 00:19:00.426989 kubelet[2858]: I0509 00:19:00.426890 2858 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8dcedc0e-11a9-42de-b292-9e0db07cf3f3-kube-api-access-fg4rk" (OuterVolumeSpecName: "kube-api-access-fg4rk") pod "8dcedc0e-11a9-42de-b292-9e0db07cf3f3" (UID: "8dcedc0e-11a9-42de-b292-9e0db07cf3f3"). InnerVolumeSpecName "kube-api-access-fg4rk". PluginName "kubernetes.io/projected", VolumeGidValue ""
May 9 00:19:00.426989 kubelet[2858]: I0509 00:19:00.426952 2858 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/789409de-01e7-47e9-940b-9208b464f021-kube-api-access-4sxff" (OuterVolumeSpecName: "kube-api-access-4sxff") pod "789409de-01e7-47e9-940b-9208b464f021" (UID: "789409de-01e7-47e9-940b-9208b464f021"). InnerVolumeSpecName "kube-api-access-4sxff". PluginName "kubernetes.io/projected", VolumeGidValue ""
May 9 00:19:00.514700 kubelet[2858]: I0509 00:19:00.514284 2858 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8dcedc0e-11a9-42de-b292-9e0db07cf3f3-xtables-lock\") on node \"ci-4152-2-3-n-8b48d2c086\" DevicePath \"\""
May 9 00:19:00.514700 kubelet[2858]: I0509 00:19:00.514369 2858 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8dcedc0e-11a9-42de-b292-9e0db07cf3f3-hostproc\") on node \"ci-4152-2-3-n-8b48d2c086\" DevicePath \"\""
May 9 00:19:00.514700 kubelet[2858]: I0509 00:19:00.514387 2858 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/789409de-01e7-47e9-940b-9208b464f021-cilium-config-path\") on node \"ci-4152-2-3-n-8b48d2c086\" DevicePath \"\""
May 9 00:19:00.514700 kubelet[2858]: I0509 00:19:00.514403 2858 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8dcedc0e-11a9-42de-b292-9e0db07cf3f3-cni-path\") on node \"ci-4152-2-3-n-8b48d2c086\" DevicePath \"\""
May 9 00:19:00.514700 kubelet[2858]: I0509 00:19:00.514417 2858 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-fg4rk\" (UniqueName: \"kubernetes.io/projected/8dcedc0e-11a9-42de-b292-9e0db07cf3f3-kube-api-access-fg4rk\") on node \"ci-4152-2-3-n-8b48d2c086\" DevicePath \"\""
May 9 00:19:00.514700 kubelet[2858]: I0509 00:19:00.514433 2858 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8dcedc0e-11a9-42de-b292-9e0db07cf3f3-cilium-run\") on node \"ci-4152-2-3-n-8b48d2c086\" DevicePath \"\""
May 9 00:19:00.514700 kubelet[2858]: I0509 00:19:00.514447 2858 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8dcedc0e-11a9-42de-b292-9e0db07cf3f3-cilium-cgroup\") on node \"ci-4152-2-3-n-8b48d2c086\" DevicePath \"\""
May 9 00:19:00.514700 kubelet[2858]: I0509 00:19:00.514460 2858 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-4sxff\" (UniqueName: \"kubernetes.io/projected/789409de-01e7-47e9-940b-9208b464f021-kube-api-access-4sxff\") on node \"ci-4152-2-3-n-8b48d2c086\" DevicePath \"\""
May 9 00:19:00.515103 kubelet[2858]: I0509 00:19:00.514472 2858 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8dcedc0e-11a9-42de-b292-9e0db07cf3f3-clustermesh-secrets\") on node \"ci-4152-2-3-n-8b48d2c086\" DevicePath \"\""
May 9 00:19:00.515103 kubelet[2858]: I0509 00:19:00.514487 2858 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8dcedc0e-11a9-42de-b292-9e0db07cf3f3-etc-cni-netd\") on node \"ci-4152-2-3-n-8b48d2c086\" DevicePath \"\""
May 9 00:19:00.515103 kubelet[2858]: I0509 00:19:00.514500 2858 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8dcedc0e-11a9-42de-b292-9e0db07cf3f3-cilium-config-path\") on node \"ci-4152-2-3-n-8b48d2c086\" DevicePath \"\""
May 9 00:19:00.515103 kubelet[2858]: I0509 00:19:00.514516 2858 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8dcedc0e-11a9-42de-b292-9e0db07cf3f3-host-proc-sys-kernel\") on node \"ci-4152-2-3-n-8b48d2c086\" DevicePath \"\""
May 9 00:19:00.515103 kubelet[2858]: I0509 00:19:00.514532 2858 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8dcedc0e-11a9-42de-b292-9e0db07cf3f3-lib-modules\") on node \"ci-4152-2-3-n-8b48d2c086\" DevicePath \"\""
May 9 00:19:00.515103 kubelet[2858]: I0509 00:19:00.514544 2858 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8dcedc0e-11a9-42de-b292-9e0db07cf3f3-host-proc-sys-net\") on node \"ci-4152-2-3-n-8b48d2c086\" DevicePath \"\""
May 9 00:19:00.515103 kubelet[2858]: I0509 00:19:00.514558 2858 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8dcedc0e-11a9-42de-b292-9e0db07cf3f3-bpf-maps\") on node \"ci-4152-2-3-n-8b48d2c086\" DevicePath \"\""
May 9 00:19:00.515103 kubelet[2858]: I0509 00:19:00.514574 2858 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8dcedc0e-11a9-42de-b292-9e0db07cf3f3-hubble-tls\") on node \"ci-4152-2-3-n-8b48d2c086\" DevicePath \"\""
May 9 00:19:00.706008 systemd[1]: Removed slice kubepods-burstable-pod8dcedc0e_11a9_42de_b292_9e0db07cf3f3.slice - libcontainer container kubepods-burstable-pod8dcedc0e_11a9_42de_b292_9e0db07cf3f3.slice.
May 9 00:19:00.706139 systemd[1]: kubepods-burstable-pod8dcedc0e_11a9_42de_b292_9e0db07cf3f3.slice: Consumed 6.637s CPU time.
May 9 00:19:00.723315 kubelet[2858]: I0509 00:19:00.723261 2858 scope.go:117] "RemoveContainer" containerID="00f77ef3b9fcb235c34a6ea569b0427834aa191f83f8c847472e237534eda186"
May 9 00:19:00.730094 systemd[1]: Removed slice kubepods-besteffort-pod789409de_01e7_47e9_940b_9208b464f021.slice - libcontainer container kubepods-besteffort-pod789409de_01e7_47e9_940b_9208b464f021.slice.
May 9 00:19:00.734077 containerd[1517]: time="2025-05-09T00:19:00.733731796Z" level=info msg="RemoveContainer for \"00f77ef3b9fcb235c34a6ea569b0427834aa191f83f8c847472e237534eda186\""
May 9 00:19:00.737015 containerd[1517]: time="2025-05-09T00:19:00.736977511Z" level=info msg="RemoveContainer for \"00f77ef3b9fcb235c34a6ea569b0427834aa191f83f8c847472e237534eda186\" returns successfully"
May 9 00:19:00.737748 kubelet[2858]: I0509 00:19:00.737369 2858 scope.go:117] "RemoveContainer" containerID="8a56c2db8807cf02225498f98e4a8cb21de96a0284e506c3365d34046d442c93"
May 9 00:19:00.740660 containerd[1517]: time="2025-05-09T00:19:00.740483526Z" level=info msg="RemoveContainer for \"8a56c2db8807cf02225498f98e4a8cb21de96a0284e506c3365d34046d442c93\""
May 9 00:19:00.744858 containerd[1517]: time="2025-05-09T00:19:00.744819960Z" level=info msg="RemoveContainer for \"8a56c2db8807cf02225498f98e4a8cb21de96a0284e506c3365d34046d442c93\" returns successfully"
May 9 00:19:00.745161 kubelet[2858]: I0509 00:19:00.745036 2858 scope.go:117] "RemoveContainer" containerID="25a67c8a378a7bc8447efe87bf5bd40d6c0de29b84bbfed4011cea25175a1f85"
May 9 00:19:00.746808 containerd[1517]: time="2025-05-09T00:19:00.746554321Z" level=info msg="RemoveContainer for \"25a67c8a378a7bc8447efe87bf5bd40d6c0de29b84bbfed4011cea25175a1f85\""
May 9 00:19:00.753618 containerd[1517]: time="2025-05-09T00:19:00.753018219Z" level=info msg="RemoveContainer for \"25a67c8a378a7bc8447efe87bf5bd40d6c0de29b84bbfed4011cea25175a1f85\" returns successfully"
May 9 00:19:00.755445 kubelet[2858]: I0509 00:19:00.755331 2858 scope.go:117] "RemoveContainer" containerID="59d87a85360d916aa63249a398e56a8084d35763473bdde2a687b0a583d46869"
May 9 00:19:00.762891 containerd[1517]: time="2025-05-09T00:19:00.762840873Z" level=info msg="RemoveContainer for \"59d87a85360d916aa63249a398e56a8084d35763473bdde2a687b0a583d46869\""
May 9 00:19:00.765461 containerd[1517]: time="2025-05-09T00:19:00.765354697Z" level=info msg="RemoveContainer for \"59d87a85360d916aa63249a398e56a8084d35763473bdde2a687b0a583d46869\" returns successfully"
May 9 00:19:00.766230 kubelet[2858]: I0509 00:19:00.765797 2858 scope.go:117] "RemoveContainer" containerID="58ea4445266e0b54b7a75153ef5165bdb11e40c693c8f67d0eb0da1af14fcbb2"
May 9 00:19:00.766540 containerd[1517]: time="2025-05-09T00:19:00.766519925Z" level=info msg="RemoveContainer for \"58ea4445266e0b54b7a75153ef5165bdb11e40c693c8f67d0eb0da1af14fcbb2\""
May 9 00:19:00.768842 containerd[1517]: time="2025-05-09T00:19:00.768807643Z" level=info msg="RemoveContainer for \"58ea4445266e0b54b7a75153ef5165bdb11e40c693c8f67d0eb0da1af14fcbb2\" returns successfully"
May 9 00:19:00.769900 kubelet[2858]: I0509 00:19:00.769047 2858 scope.go:117] "RemoveContainer" containerID="00f77ef3b9fcb235c34a6ea569b0427834aa191f83f8c847472e237534eda186"
May 9 00:19:00.769973 containerd[1517]: time="2025-05-09T00:19:00.769216073Z" level=error msg="ContainerStatus for \"00f77ef3b9fcb235c34a6ea569b0427834aa191f83f8c847472e237534eda186\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"00f77ef3b9fcb235c34a6ea569b0427834aa191f83f8c847472e237534eda186\": not found"
May 9 00:19:00.772219 kubelet[2858]: E0509 00:19:00.770883 2858 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"00f77ef3b9fcb235c34a6ea569b0427834aa191f83f8c847472e237534eda186\": not found" containerID="00f77ef3b9fcb235c34a6ea569b0427834aa191f83f8c847472e237534eda186"
May 9 00:19:00.772219 kubelet[2858]: I0509 00:19:00.772116 2858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"00f77ef3b9fcb235c34a6ea569b0427834aa191f83f8c847472e237534eda186"} err="failed to get container status \"00f77ef3b9fcb235c34a6ea569b0427834aa191f83f8c847472e237534eda186\": rpc error: code = NotFound desc = an error occurred when try to find container \"00f77ef3b9fcb235c34a6ea569b0427834aa191f83f8c847472e237534eda186\": not found"
May 9 00:19:00.772219 kubelet[2858]: I0509 00:19:00.772195 2858 scope.go:117] "RemoveContainer" containerID="8a56c2db8807cf02225498f98e4a8cb21de96a0284e506c3365d34046d442c93"
May 9 00:19:00.772477 containerd[1517]: time="2025-05-09T00:19:00.772443283Z" level=error msg="ContainerStatus for \"8a56c2db8807cf02225498f98e4a8cb21de96a0284e506c3365d34046d442c93\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8a56c2db8807cf02225498f98e4a8cb21de96a0284e506c3365d34046d442c93\": not found"
May 9 00:19:00.773846 kubelet[2858]: E0509 00:19:00.772685 2858 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8a56c2db8807cf02225498f98e4a8cb21de96a0284e506c3365d34046d442c93\": not found" containerID="8a56c2db8807cf02225498f98e4a8cb21de96a0284e506c3365d34046d442c93"
May 9 00:19:00.773846 kubelet[2858]: I0509 00:19:00.772706 2858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8a56c2db8807cf02225498f98e4a8cb21de96a0284e506c3365d34046d442c93"} err="failed to get container status \"8a56c2db8807cf02225498f98e4a8cb21de96a0284e506c3365d34046d442c93\": rpc error: code = NotFound desc = an error occurred when try to find container \"8a56c2db8807cf02225498f98e4a8cb21de96a0284e506c3365d34046d442c93\": not found"
May 9 00:19:00.773846 kubelet[2858]: I0509 00:19:00.772718 2858 scope.go:117] "RemoveContainer" containerID="25a67c8a378a7bc8447efe87bf5bd40d6c0de29b84bbfed4011cea25175a1f85"
May 9 00:19:00.773846 kubelet[2858]: E0509 00:19:00.772972 2858 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"25a67c8a378a7bc8447efe87bf5bd40d6c0de29b84bbfed4011cea25175a1f85\": not found" containerID="25a67c8a378a7bc8447efe87bf5bd40d6c0de29b84bbfed4011cea25175a1f85"
May 9 00:19:00.773846 kubelet[2858]: I0509 00:19:00.772985 2858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"25a67c8a378a7bc8447efe87bf5bd40d6c0de29b84bbfed4011cea25175a1f85"} err="failed to get container status \"25a67c8a378a7bc8447efe87bf5bd40d6c0de29b84bbfed4011cea25175a1f85\": rpc error: code = NotFound desc = an error occurred when try to find container \"25a67c8a378a7bc8447efe87bf5bd40d6c0de29b84bbfed4011cea25175a1f85\": not found"
May 9 00:19:00.773846 kubelet[2858]: I0509 00:19:00.772995 2858 scope.go:117] "RemoveContainer" containerID="59d87a85360d916aa63249a398e56a8084d35763473bdde2a687b0a583d46869"
May 9 00:19:00.774063 containerd[1517]: time="2025-05-09T00:19:00.772891128Z" level=error msg="ContainerStatus for \"25a67c8a378a7bc8447efe87bf5bd40d6c0de29b84bbfed4011cea25175a1f85\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"25a67c8a378a7bc8447efe87bf5bd40d6c0de29b84bbfed4011cea25175a1f85\": not found"
May 9 00:19:00.774063 containerd[1517]: time="2025-05-09T00:19:00.773158493Z" level=error msg="ContainerStatus for \"59d87a85360d916aa63249a398e56a8084d35763473bdde2a687b0a583d46869\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"59d87a85360d916aa63249a398e56a8084d35763473bdde2a687b0a583d46869\": not found"
May 9 00:19:00.774133 kubelet[2858]: E0509 00:19:00.773252 2858 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"59d87a85360d916aa63249a398e56a8084d35763473bdde2a687b0a583d46869\": not found" containerID="59d87a85360d916aa63249a398e56a8084d35763473bdde2a687b0a583d46869"
May 9 00:19:00.774133 kubelet[2858]: I0509 00:19:00.773775 2858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"59d87a85360d916aa63249a398e56a8084d35763473bdde2a687b0a583d46869"} err="failed to get container status \"59d87a85360d916aa63249a398e56a8084d35763473bdde2a687b0a583d46869\": rpc error: code = NotFound desc = an error occurred when try to find container \"59d87a85360d916aa63249a398e56a8084d35763473bdde2a687b0a583d46869\": not found"
May 9 00:19:00.774133 kubelet[2858]: I0509 00:19:00.773802 2858 scope.go:117] "RemoveContainer" containerID="58ea4445266e0b54b7a75153ef5165bdb11e40c693c8f67d0eb0da1af14fcbb2"
May 9 00:19:00.775716 containerd[1517]: time="2025-05-09T00:19:00.774312510Z" level=error msg="ContainerStatus for \"58ea4445266e0b54b7a75153ef5165bdb11e40c693c8f67d0eb0da1af14fcbb2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"58ea4445266e0b54b7a75153ef5165bdb11e40c693c8f67d0eb0da1af14fcbb2\": not found"
May 9 00:19:00.775775 kubelet[2858]: E0509 00:19:00.774505 2858 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"58ea4445266e0b54b7a75153ef5165bdb11e40c693c8f67d0eb0da1af14fcbb2\": not found" containerID="58ea4445266e0b54b7a75153ef5165bdb11e40c693c8f67d0eb0da1af14fcbb2"
May 9 00:19:00.775775 kubelet[2858]: I0509 00:19:00.774549 2858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"58ea4445266e0b54b7a75153ef5165bdb11e40c693c8f67d0eb0da1af14fcbb2"} err="failed to get container status \"58ea4445266e0b54b7a75153ef5165bdb11e40c693c8f67d0eb0da1af14fcbb2\": rpc error: code = NotFound desc = an error occurred when try to find container \"58ea4445266e0b54b7a75153ef5165bdb11e40c693c8f67d0eb0da1af14fcbb2\": not found"
May 9 00:19:00.775775 kubelet[2858]: I0509 00:19:00.774563 2858 scope.go:117] "RemoveContainer" containerID="9b2bba679273c5ab0d102989bd0efb6496c52ffd0ce6e26374720a8dde7a87c6"
May 9 00:19:00.790797 containerd[1517]: time="2025-05-09T00:19:00.790760086Z" level=info msg="RemoveContainer for \"9b2bba679273c5ab0d102989bd0efb6496c52ffd0ce6e26374720a8dde7a87c6\""
May 9 00:19:00.794288 containerd[1517]: time="2025-05-09T00:19:00.794258518Z" level=info msg="RemoveContainer for \"9b2bba679273c5ab0d102989bd0efb6496c52ffd0ce6e26374720a8dde7a87c6\" returns successfully"
May 9 00:19:00.794611 kubelet[2858]: I0509 00:19:00.794581 2858 scope.go:117] "RemoveContainer" containerID="9b2bba679273c5ab0d102989bd0efb6496c52ffd0ce6e26374720a8dde7a87c6"
May 9 00:19:00.794907 containerd[1517]: time="2025-05-09T00:19:00.794857558Z" level=error msg="ContainerStatus for \"9b2bba679273c5ab0d102989bd0efb6496c52ffd0ce6e26374720a8dde7a87c6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9b2bba679273c5ab0d102989bd0efb6496c52ffd0ce6e26374720a8dde7a87c6\": not found"
May 9 00:19:00.794997 kubelet[2858]: E0509 00:19:00.794979 2858 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9b2bba679273c5ab0d102989bd0efb6496c52ffd0ce6e26374720a8dde7a87c6\": not found" containerID="9b2bba679273c5ab0d102989bd0efb6496c52ffd0ce6e26374720a8dde7a87c6"
May 9 00:19:00.795048 kubelet[2858]: I0509 00:19:00.794999 2858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9b2bba679273c5ab0d102989bd0efb6496c52ffd0ce6e26374720a8dde7a87c6"} err="failed to get container status \"9b2bba679273c5ab0d102989bd0efb6496c52ffd0ce6e26374720a8dde7a87c6\": rpc error: code = NotFound desc = an error occurred when try to find container \"9b2bba679273c5ab0d102989bd0efb6496c52ffd0ce6e26374720a8dde7a87c6\": not found"
May 9 00:19:00.929368 kubelet[2858]: I0509 00:19:00.929323 2858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="789409de-01e7-47e9-940b-9208b464f021" path="/var/lib/kubelet/pods/789409de-01e7-47e9-940b-9208b464f021/volumes"
May 9 00:19:00.929729 kubelet[2858]: I0509 00:19:00.929700 2858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8dcedc0e-11a9-42de-b292-9e0db07cf3f3" path="/var/lib/kubelet/pods/8dcedc0e-11a9-42de-b292-9e0db07cf3f3/volumes"
May 9 00:19:01.167521 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b825fe20b8ea591c55f0874c74dec746451a4c194dfc7159cc7653a9191f8248-rootfs.mount: Deactivated successfully.
May 9 00:19:01.167651 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-414fe5482e172e944effad1803daddb61c718e93993ae30c6068055a2cb36c4e-rootfs.mount: Deactivated successfully.
May 9 00:19:01.167710 systemd[1]: var-lib-kubelet-pods-789409de\x2d01e7\x2d47e9\x2d940b\x2d9208b464f021-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d4sxff.mount: Deactivated successfully.
May 9 00:19:01.167771 systemd[1]: var-lib-kubelet-pods-8dcedc0e\x2d11a9\x2d42de\x2db292\x2d9e0db07cf3f3-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dfg4rk.mount: Deactivated successfully.
May 9 00:19:01.167820 systemd[1]: var-lib-kubelet-pods-8dcedc0e\x2d11a9\x2d42de\x2db292\x2d9e0db07cf3f3-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
May 9 00:19:01.167867 systemd[1]: var-lib-kubelet-pods-8dcedc0e\x2d11a9\x2d42de\x2db292\x2d9e0db07cf3f3-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
May 9 00:19:02.247512 sshd[4439]: Connection closed by 139.178.68.195 port 48940
May 9 00:19:02.248113 sshd-session[4437]: pam_unix(sshd:session): session closed for user core
May 9 00:19:02.250759 systemd[1]: sshd@20-157.180.45.97:22-139.178.68.195:48940.service: Deactivated successfully.
May 9 00:19:02.252105 systemd[1]: session-20.scope: Deactivated successfully.
May 9 00:19:02.253193 systemd-logind[1495]: Session 20 logged out. Waiting for processes to exit.
May 9 00:19:02.254531 systemd-logind[1495]: Removed session 20. May 9 00:19:02.414728 systemd[1]: Started sshd@21-157.180.45.97:22-139.178.68.195:48944.service - OpenSSH per-connection server daemon (139.178.68.195:48944). May 9 00:19:03.384980 sshd[4600]: Accepted publickey for core from 139.178.68.195 port 48944 ssh2: RSA SHA256:kWmuzyOdL82NqCTDeKfCPqtPYuFTqtQu4IYGGTbCa4E May 9 00:19:03.386226 sshd-session[4600]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:19:03.390650 systemd-logind[1495]: New session 21 of user core. May 9 00:19:03.396568 systemd[1]: Started session-21.scope - Session 21 of User core. May 9 00:19:04.054950 kubelet[2858]: E0509 00:19:04.050669 2858 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 9 00:19:04.804331 kubelet[2858]: I0509 00:19:04.802874 2858 topology_manager.go:215] "Topology Admit Handler" podUID="2899062d-3af2-4c3b-966a-09b9fcd93869" podNamespace="kube-system" podName="cilium-wjwjl" May 9 00:19:04.804331 kubelet[2858]: E0509 00:19:04.802944 2858 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="789409de-01e7-47e9-940b-9208b464f021" containerName="cilium-operator" May 9 00:19:04.804331 kubelet[2858]: E0509 00:19:04.802957 2858 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8dcedc0e-11a9-42de-b292-9e0db07cf3f3" containerName="mount-cgroup" May 9 00:19:04.804331 kubelet[2858]: E0509 00:19:04.802966 2858 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8dcedc0e-11a9-42de-b292-9e0db07cf3f3" containerName="apply-sysctl-overwrites" May 9 00:19:04.804331 kubelet[2858]: E0509 00:19:04.802975 2858 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8dcedc0e-11a9-42de-b292-9e0db07cf3f3" containerName="cilium-agent" May 9 00:19:04.804331 kubelet[2858]: E0509 00:19:04.802983 2858 
cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8dcedc0e-11a9-42de-b292-9e0db07cf3f3" containerName="mount-bpf-fs" May 9 00:19:04.804331 kubelet[2858]: E0509 00:19:04.802991 2858 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8dcedc0e-11a9-42de-b292-9e0db07cf3f3" containerName="clean-cilium-state" May 9 00:19:04.804331 kubelet[2858]: I0509 00:19:04.803023 2858 memory_manager.go:354] "RemoveStaleState removing state" podUID="8dcedc0e-11a9-42de-b292-9e0db07cf3f3" containerName="cilium-agent" May 9 00:19:04.804331 kubelet[2858]: I0509 00:19:04.803032 2858 memory_manager.go:354] "RemoveStaleState removing state" podUID="789409de-01e7-47e9-940b-9208b464f021" containerName="cilium-operator" May 9 00:19:04.839848 kubelet[2858]: I0509 00:19:04.839811 2858 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/2899062d-3af2-4c3b-966a-09b9fcd93869-cilium-ipsec-secrets\") pod \"cilium-wjwjl\" (UID: \"2899062d-3af2-4c3b-966a-09b9fcd93869\") " pod="kube-system/cilium-wjwjl" May 9 00:19:04.841799 kubelet[2858]: I0509 00:19:04.840002 2858 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2899062d-3af2-4c3b-966a-09b9fcd93869-hostproc\") pod \"cilium-wjwjl\" (UID: \"2899062d-3af2-4c3b-966a-09b9fcd93869\") " pod="kube-system/cilium-wjwjl" May 9 00:19:04.841799 kubelet[2858]: I0509 00:19:04.840038 2858 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2899062d-3af2-4c3b-966a-09b9fcd93869-host-proc-sys-net\") pod \"cilium-wjwjl\" (UID: \"2899062d-3af2-4c3b-966a-09b9fcd93869\") " pod="kube-system/cilium-wjwjl" May 9 00:19:04.841799 kubelet[2858]: I0509 00:19:04.840062 2858 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2899062d-3af2-4c3b-966a-09b9fcd93869-hubble-tls\") pod \"cilium-wjwjl\" (UID: \"2899062d-3af2-4c3b-966a-09b9fcd93869\") " pod="kube-system/cilium-wjwjl" May 9 00:19:04.841799 kubelet[2858]: I0509 00:19:04.840085 2858 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2899062d-3af2-4c3b-966a-09b9fcd93869-xtables-lock\") pod \"cilium-wjwjl\" (UID: \"2899062d-3af2-4c3b-966a-09b9fcd93869\") " pod="kube-system/cilium-wjwjl" May 9 00:19:04.841799 kubelet[2858]: I0509 00:19:04.840108 2858 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ftv5q\" (UniqueName: \"kubernetes.io/projected/2899062d-3af2-4c3b-966a-09b9fcd93869-kube-api-access-ftv5q\") pod \"cilium-wjwjl\" (UID: \"2899062d-3af2-4c3b-966a-09b9fcd93869\") " pod="kube-system/cilium-wjwjl" May 9 00:19:04.841799 kubelet[2858]: I0509 00:19:04.840135 2858 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2899062d-3af2-4c3b-966a-09b9fcd93869-cilium-run\") pod \"cilium-wjwjl\" (UID: \"2899062d-3af2-4c3b-966a-09b9fcd93869\") " pod="kube-system/cilium-wjwjl" May 9 00:19:04.841939 kubelet[2858]: I0509 00:19:04.840161 2858 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2899062d-3af2-4c3b-966a-09b9fcd93869-cilium-cgroup\") pod \"cilium-wjwjl\" (UID: \"2899062d-3af2-4c3b-966a-09b9fcd93869\") " pod="kube-system/cilium-wjwjl" May 9 00:19:04.841939 kubelet[2858]: I0509 00:19:04.840187 2858 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: 
\"kubernetes.io/host-path/2899062d-3af2-4c3b-966a-09b9fcd93869-host-proc-sys-kernel\") pod \"cilium-wjwjl\" (UID: \"2899062d-3af2-4c3b-966a-09b9fcd93869\") " pod="kube-system/cilium-wjwjl" May 9 00:19:04.841939 kubelet[2858]: I0509 00:19:04.840200 2858 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2899062d-3af2-4c3b-966a-09b9fcd93869-bpf-maps\") pod \"cilium-wjwjl\" (UID: \"2899062d-3af2-4c3b-966a-09b9fcd93869\") " pod="kube-system/cilium-wjwjl" May 9 00:19:04.841939 kubelet[2858]: I0509 00:19:04.840211 2858 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2899062d-3af2-4c3b-966a-09b9fcd93869-etc-cni-netd\") pod \"cilium-wjwjl\" (UID: \"2899062d-3af2-4c3b-966a-09b9fcd93869\") " pod="kube-system/cilium-wjwjl" May 9 00:19:04.841939 kubelet[2858]: I0509 00:19:04.840222 2858 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2899062d-3af2-4c3b-966a-09b9fcd93869-cilium-config-path\") pod \"cilium-wjwjl\" (UID: \"2899062d-3af2-4c3b-966a-09b9fcd93869\") " pod="kube-system/cilium-wjwjl" May 9 00:19:04.841939 kubelet[2858]: I0509 00:19:04.840234 2858 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2899062d-3af2-4c3b-966a-09b9fcd93869-cni-path\") pod \"cilium-wjwjl\" (UID: \"2899062d-3af2-4c3b-966a-09b9fcd93869\") " pod="kube-system/cilium-wjwjl" May 9 00:19:04.842045 kubelet[2858]: I0509 00:19:04.840245 2858 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2899062d-3af2-4c3b-966a-09b9fcd93869-lib-modules\") pod \"cilium-wjwjl\" (UID: \"2899062d-3af2-4c3b-966a-09b9fcd93869\") 
" pod="kube-system/cilium-wjwjl" May 9 00:19:04.842045 kubelet[2858]: I0509 00:19:04.840257 2858 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2899062d-3af2-4c3b-966a-09b9fcd93869-clustermesh-secrets\") pod \"cilium-wjwjl\" (UID: \"2899062d-3af2-4c3b-966a-09b9fcd93869\") " pod="kube-system/cilium-wjwjl" May 9 00:19:04.852901 systemd[1]: Created slice kubepods-burstable-pod2899062d_3af2_4c3b_966a_09b9fcd93869.slice - libcontainer container kubepods-burstable-pod2899062d_3af2_4c3b_966a_09b9fcd93869.slice. May 9 00:19:04.944204 sshd[4602]: Connection closed by 139.178.68.195 port 48944 May 9 00:19:04.946458 sshd-session[4600]: pam_unix(sshd:session): session closed for user core May 9 00:19:04.969143 systemd[1]: sshd@21-157.180.45.97:22-139.178.68.195:48944.service: Deactivated successfully. May 9 00:19:04.970936 systemd[1]: session-21.scope: Deactivated successfully. May 9 00:19:04.971575 systemd-logind[1495]: Session 21 logged out. Waiting for processes to exit. May 9 00:19:04.973055 systemd-logind[1495]: Removed session 21. May 9 00:19:05.108856 systemd[1]: Started sshd@22-157.180.45.97:22-139.178.68.195:48958.service - OpenSSH per-connection server daemon (139.178.68.195:48958). May 9 00:19:05.158347 containerd[1517]: time="2025-05-09T00:19:05.158244424Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wjwjl,Uid:2899062d-3af2-4c3b-966a-09b9fcd93869,Namespace:kube-system,Attempt:0,}" May 9 00:19:05.181872 containerd[1517]: time="2025-05-09T00:19:05.181754606Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 9 00:19:05.181872 containerd[1517]: time="2025-05-09T00:19:05.181847241Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 9 00:19:05.182134 containerd[1517]: time="2025-05-09T00:19:05.181874492Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:19:05.182134 containerd[1517]: time="2025-05-09T00:19:05.182031098Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:19:05.203598 systemd[1]: Started cri-containerd-29dd4bbd86bbe73b0ed004f7ef8bea0588ce7d428b14937183b6791ca84e7fc4.scope - libcontainer container 29dd4bbd86bbe73b0ed004f7ef8bea0588ce7d428b14937183b6791ca84e7fc4. May 9 00:19:05.230404 containerd[1517]: time="2025-05-09T00:19:05.230349643Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wjwjl,Uid:2899062d-3af2-4c3b-966a-09b9fcd93869,Namespace:kube-system,Attempt:0,} returns sandbox id \"29dd4bbd86bbe73b0ed004f7ef8bea0588ce7d428b14937183b6791ca84e7fc4\"" May 9 00:19:05.240363 containerd[1517]: time="2025-05-09T00:19:05.240289329Z" level=info msg="CreateContainer within sandbox \"29dd4bbd86bbe73b0ed004f7ef8bea0588ce7d428b14937183b6791ca84e7fc4\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 9 00:19:05.254317 containerd[1517]: time="2025-05-09T00:19:05.254221619Z" level=info msg="CreateContainer within sandbox \"29dd4bbd86bbe73b0ed004f7ef8bea0588ce7d428b14937183b6791ca84e7fc4\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"d88256ec8b6c96f9826cf3f33f283a631f1865f4aa31c7c966cb5683b91d1ea9\"" May 9 00:19:05.255406 containerd[1517]: time="2025-05-09T00:19:05.254768270Z" level=info msg="StartContainer for \"d88256ec8b6c96f9826cf3f33f283a631f1865f4aa31c7c966cb5683b91d1ea9\"" May 9 00:19:05.279555 systemd[1]: Started cri-containerd-d88256ec8b6c96f9826cf3f33f283a631f1865f4aa31c7c966cb5683b91d1ea9.scope - libcontainer container d88256ec8b6c96f9826cf3f33f283a631f1865f4aa31c7c966cb5683b91d1ea9. 
May 9 00:19:05.302057 containerd[1517]: time="2025-05-09T00:19:05.302009132Z" level=info msg="StartContainer for \"d88256ec8b6c96f9826cf3f33f283a631f1865f4aa31c7c966cb5683b91d1ea9\" returns successfully" May 9 00:19:05.314660 systemd[1]: cri-containerd-d88256ec8b6c96f9826cf3f33f283a631f1865f4aa31c7c966cb5683b91d1ea9.scope: Deactivated successfully. May 9 00:19:05.345850 containerd[1517]: time="2025-05-09T00:19:05.345580309Z" level=info msg="shim disconnected" id=d88256ec8b6c96f9826cf3f33f283a631f1865f4aa31c7c966cb5683b91d1ea9 namespace=k8s.io May 9 00:19:05.345850 containerd[1517]: time="2025-05-09T00:19:05.345662354Z" level=warning msg="cleaning up after shim disconnected" id=d88256ec8b6c96f9826cf3f33f283a631f1865f4aa31c7c966cb5683b91d1ea9 namespace=k8s.io May 9 00:19:05.345850 containerd[1517]: time="2025-05-09T00:19:05.345674597Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 9 00:19:05.452766 kubelet[2858]: I0509 00:19:05.452704 2858 setters.go:580] "Node became not ready" node="ci-4152-2-3-n-8b48d2c086" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-09T00:19:05Z","lastTransitionTime":"2025-05-09T00:19:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} May 9 00:19:05.755972 containerd[1517]: time="2025-05-09T00:19:05.755257848Z" level=info msg="CreateContainer within sandbox \"29dd4bbd86bbe73b0ed004f7ef8bea0588ce7d428b14937183b6791ca84e7fc4\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 9 00:19:05.769124 containerd[1517]: time="2025-05-09T00:19:05.769060003Z" level=info msg="CreateContainer within sandbox \"29dd4bbd86bbe73b0ed004f7ef8bea0588ce7d428b14937183b6791ca84e7fc4\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"cf02f5c605e5ece6b544e4961b72d8f93ef1ce744878da55c8ae836f0a5fa90e\"" May 9 00:19:05.769675 
containerd[1517]: time="2025-05-09T00:19:05.769571929Z" level=info msg="StartContainer for \"cf02f5c605e5ece6b544e4961b72d8f93ef1ce744878da55c8ae836f0a5fa90e\"" May 9 00:19:05.805429 systemd[1]: Started cri-containerd-cf02f5c605e5ece6b544e4961b72d8f93ef1ce744878da55c8ae836f0a5fa90e.scope - libcontainer container cf02f5c605e5ece6b544e4961b72d8f93ef1ce744878da55c8ae836f0a5fa90e. May 9 00:19:05.842170 containerd[1517]: time="2025-05-09T00:19:05.842086170Z" level=info msg="StartContainer for \"cf02f5c605e5ece6b544e4961b72d8f93ef1ce744878da55c8ae836f0a5fa90e\" returns successfully" May 9 00:19:05.853540 systemd[1]: cri-containerd-cf02f5c605e5ece6b544e4961b72d8f93ef1ce744878da55c8ae836f0a5fa90e.scope: Deactivated successfully. May 9 00:19:05.872967 containerd[1517]: time="2025-05-09T00:19:05.872901989Z" level=info msg="shim disconnected" id=cf02f5c605e5ece6b544e4961b72d8f93ef1ce744878da55c8ae836f0a5fa90e namespace=k8s.io May 9 00:19:05.872967 containerd[1517]: time="2025-05-09T00:19:05.872951261Z" level=warning msg="cleaning up after shim disconnected" id=cf02f5c605e5ece6b544e4961b72d8f93ef1ce744878da55c8ae836f0a5fa90e namespace=k8s.io May 9 00:19:05.872967 containerd[1517]: time="2025-05-09T00:19:05.872959727Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 9 00:19:06.085080 sshd[4616]: Accepted publickey for core from 139.178.68.195 port 48958 ssh2: RSA SHA256:kWmuzyOdL82NqCTDeKfCPqtPYuFTqtQu4IYGGTbCa4E May 9 00:19:06.086483 sshd-session[4616]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:19:06.090352 systemd-logind[1495]: New session 22 of user core. May 9 00:19:06.098422 systemd[1]: Started session-22.scope - Session 22 of User core. 
May 9 00:19:06.757562 sshd[4780]: Connection closed by 139.178.68.195 port 48958 May 9 00:19:06.756947 sshd-session[4616]: pam_unix(sshd:session): session closed for user core May 9 00:19:06.759515 containerd[1517]: time="2025-05-09T00:19:06.758455281Z" level=info msg="CreateContainer within sandbox \"29dd4bbd86bbe73b0ed004f7ef8bea0588ce7d428b14937183b6791ca84e7fc4\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 9 00:19:06.762581 systemd-logind[1495]: Session 22 logged out. Waiting for processes to exit. May 9 00:19:06.763641 systemd[1]: sshd@22-157.180.45.97:22-139.178.68.195:48958.service: Deactivated successfully. May 9 00:19:06.767882 systemd[1]: session-22.scope: Deactivated successfully. May 9 00:19:06.770792 systemd-logind[1495]: Removed session 22. May 9 00:19:06.787880 containerd[1517]: time="2025-05-09T00:19:06.787841040Z" level=info msg="CreateContainer within sandbox \"29dd4bbd86bbe73b0ed004f7ef8bea0588ce7d428b14937183b6791ca84e7fc4\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"b6016d1c720858b743588e0ee8f1d706547bc156bae12af89067a04fd7f9a8ca\"" May 9 00:19:06.788388 containerd[1517]: time="2025-05-09T00:19:06.788364759Z" level=info msg="StartContainer for \"b6016d1c720858b743588e0ee8f1d706547bc156bae12af89067a04fd7f9a8ca\"" May 9 00:19:06.810649 systemd[1]: Started cri-containerd-b6016d1c720858b743588e0ee8f1d706547bc156bae12af89067a04fd7f9a8ca.scope - libcontainer container b6016d1c720858b743588e0ee8f1d706547bc156bae12af89067a04fd7f9a8ca. May 9 00:19:06.840061 containerd[1517]: time="2025-05-09T00:19:06.839964516Z" level=info msg="StartContainer for \"b6016d1c720858b743588e0ee8f1d706547bc156bae12af89067a04fd7f9a8ca\" returns successfully" May 9 00:19:06.844849 systemd[1]: cri-containerd-b6016d1c720858b743588e0ee8f1d706547bc156bae12af89067a04fd7f9a8ca.scope: Deactivated successfully. 
May 9 00:19:06.864736 containerd[1517]: time="2025-05-09T00:19:06.864673521Z" level=info msg="shim disconnected" id=b6016d1c720858b743588e0ee8f1d706547bc156bae12af89067a04fd7f9a8ca namespace=k8s.io May 9 00:19:06.864736 containerd[1517]: time="2025-05-09T00:19:06.864728185Z" level=warning msg="cleaning up after shim disconnected" id=b6016d1c720858b743588e0ee8f1d706547bc156bae12af89067a04fd7f9a8ca namespace=k8s.io May 9 00:19:06.864736 containerd[1517]: time="2025-05-09T00:19:06.864735698Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 9 00:19:06.921283 systemd[1]: Started sshd@23-157.180.45.97:22-139.178.68.195:32798.service - OpenSSH per-connection server daemon (139.178.68.195:32798). May 9 00:19:06.945853 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b6016d1c720858b743588e0ee8f1d706547bc156bae12af89067a04fd7f9a8ca-rootfs.mount: Deactivated successfully. May 9 00:19:07.763887 containerd[1517]: time="2025-05-09T00:19:07.763811047Z" level=info msg="CreateContainer within sandbox \"29dd4bbd86bbe73b0ed004f7ef8bea0588ce7d428b14937183b6791ca84e7fc4\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 9 00:19:07.784921 containerd[1517]: time="2025-05-09T00:19:07.784533730Z" level=info msg="CreateContainer within sandbox \"29dd4bbd86bbe73b0ed004f7ef8bea0588ce7d428b14937183b6791ca84e7fc4\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"fa57c07628d2c48cc44c9822cb755b7bf7795b5ab6a743710ce241356e23b992\"" May 9 00:19:07.788409 containerd[1517]: time="2025-05-09T00:19:07.788220687Z" level=info msg="StartContainer for \"fa57c07628d2c48cc44c9822cb755b7bf7795b5ab6a743710ce241356e23b992\"" May 9 00:19:07.820425 systemd[1]: Started cri-containerd-fa57c07628d2c48cc44c9822cb755b7bf7795b5ab6a743710ce241356e23b992.scope - libcontainer container fa57c07628d2c48cc44c9822cb755b7bf7795b5ab6a743710ce241356e23b992. 
May 9 00:19:07.838730 systemd[1]: cri-containerd-fa57c07628d2c48cc44c9822cb755b7bf7795b5ab6a743710ce241356e23b992.scope: Deactivated successfully. May 9 00:19:07.840132 containerd[1517]: time="2025-05-09T00:19:07.838926979Z" level=info msg="StartContainer for \"fa57c07628d2c48cc44c9822cb755b7bf7795b5ab6a743710ce241356e23b992\" returns successfully" May 9 00:19:07.873953 containerd[1517]: time="2025-05-09T00:19:07.873903839Z" level=info msg="shim disconnected" id=fa57c07628d2c48cc44c9822cb755b7bf7795b5ab6a743710ce241356e23b992 namespace=k8s.io May 9 00:19:07.874321 containerd[1517]: time="2025-05-09T00:19:07.874185791Z" level=warning msg="cleaning up after shim disconnected" id=fa57c07628d2c48cc44c9822cb755b7bf7795b5ab6a743710ce241356e23b992 namespace=k8s.io May 9 00:19:07.874321 containerd[1517]: time="2025-05-09T00:19:07.874204636Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 9 00:19:07.891398 sshd[4841]: Accepted publickey for core from 139.178.68.195 port 32798 ssh2: RSA SHA256:kWmuzyOdL82NqCTDeKfCPqtPYuFTqtQu4IYGGTbCa4E May 9 00:19:07.893762 sshd-session[4841]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:19:07.899908 systemd-logind[1495]: New session 23 of user core. May 9 00:19:07.903573 systemd[1]: Started session-23.scope - Session 23 of User core. May 9 00:19:07.945567 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fa57c07628d2c48cc44c9822cb755b7bf7795b5ab6a743710ce241356e23b992-rootfs.mount: Deactivated successfully. 
May 9 00:19:08.763897 containerd[1517]: time="2025-05-09T00:19:08.763752750Z" level=info msg="CreateContainer within sandbox \"29dd4bbd86bbe73b0ed004f7ef8bea0588ce7d428b14937183b6791ca84e7fc4\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 9 00:19:08.782754 containerd[1517]: time="2025-05-09T00:19:08.780956358Z" level=info msg="CreateContainer within sandbox \"29dd4bbd86bbe73b0ed004f7ef8bea0588ce7d428b14937183b6791ca84e7fc4\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"1b1f038db0f2cf9a1e56b7f59500258e93ca69931ba117e8571f9e7a067e1157\"" May 9 00:19:08.783321 containerd[1517]: time="2025-05-09T00:19:08.782893784Z" level=info msg="StartContainer for \"1b1f038db0f2cf9a1e56b7f59500258e93ca69931ba117e8571f9e7a067e1157\"" May 9 00:19:08.807415 systemd[1]: Started cri-containerd-1b1f038db0f2cf9a1e56b7f59500258e93ca69931ba117e8571f9e7a067e1157.scope - libcontainer container 1b1f038db0f2cf9a1e56b7f59500258e93ca69931ba117e8571f9e7a067e1157. May 9 00:19:08.828604 containerd[1517]: time="2025-05-09T00:19:08.828564847Z" level=info msg="StartContainer for \"1b1f038db0f2cf9a1e56b7f59500258e93ca69931ba117e8571f9e7a067e1157\" returns successfully" May 9 00:19:09.231341 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) May 9 00:19:09.790320 kubelet[2858]: I0509 00:19:09.789689 2858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-wjwjl" podStartSLOduration=5.789670866 podStartE2EDuration="5.789670866s" podCreationTimestamp="2025-05-09 00:19:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-09 00:19:09.788960913 +0000 UTC m=+340.933186779" watchObservedRunningTime="2025-05-09 00:19:09.789670866 +0000 UTC m=+340.933896721" May 9 00:19:10.711922 kubelet[2858]: E0509 00:19:10.711872 2858 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 
127.0.0.1:42656->127.0.0.1:45663: write tcp 127.0.0.1:42656->127.0.0.1:45663: write: broken pipe May 9 00:19:11.801581 systemd-networkd[1430]: lxc_health: Link UP May 9 00:19:11.808172 systemd-networkd[1430]: lxc_health: Gained carrier May 9 00:19:13.752717 systemd-networkd[1430]: lxc_health: Gained IPv6LL May 9 00:19:15.218666 systemd[1]: run-containerd-runc-k8s.io-1b1f038db0f2cf9a1e56b7f59500258e93ca69931ba117e8571f9e7a067e1157-runc.0g6Z0S.mount: Deactivated successfully. May 9 00:19:19.756253 sshd[4899]: Connection closed by 139.178.68.195 port 32798 May 9 00:19:19.757341 sshd-session[4841]: pam_unix(sshd:session): session closed for user core May 9 00:19:19.760538 systemd[1]: sshd@23-157.180.45.97:22-139.178.68.195:32798.service: Deactivated successfully. May 9 00:19:19.761886 systemd[1]: session-23.scope: Deactivated successfully. May 9 00:19:19.763961 systemd-logind[1495]: Session 23 logged out. Waiting for processes to exit. May 9 00:19:19.765692 systemd-logind[1495]: Removed session 23. 
May 9 00:19:28.937419 containerd[1517]: time="2025-05-09T00:19:28.937374476Z" level=info msg="StopPodSandbox for \"414fe5482e172e944effad1803daddb61c718e93993ae30c6068055a2cb36c4e\"" May 9 00:19:28.941995 containerd[1517]: time="2025-05-09T00:19:28.937458684Z" level=info msg="TearDown network for sandbox \"414fe5482e172e944effad1803daddb61c718e93993ae30c6068055a2cb36c4e\" successfully" May 9 00:19:28.941995 containerd[1517]: time="2025-05-09T00:19:28.941986759Z" level=info msg="StopPodSandbox for \"414fe5482e172e944effad1803daddb61c718e93993ae30c6068055a2cb36c4e\" returns successfully" May 9 00:19:28.942543 containerd[1517]: time="2025-05-09T00:19:28.942491993Z" level=info msg="RemovePodSandbox for \"414fe5482e172e944effad1803daddb61c718e93993ae30c6068055a2cb36c4e\"" May 9 00:19:28.942543 containerd[1517]: time="2025-05-09T00:19:28.942516129Z" level=info msg="Forcibly stopping sandbox \"414fe5482e172e944effad1803daddb61c718e93993ae30c6068055a2cb36c4e\"" May 9 00:19:28.943352 containerd[1517]: time="2025-05-09T00:19:28.943015913Z" level=info msg="TearDown network for sandbox \"414fe5482e172e944effad1803daddb61c718e93993ae30c6068055a2cb36c4e\" successfully" May 9 00:19:28.947596 containerd[1517]: time="2025-05-09T00:19:28.947572723Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"414fe5482e172e944effad1803daddb61c718e93993ae30c6068055a2cb36c4e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 9 00:19:28.947654 containerd[1517]: time="2025-05-09T00:19:28.947617667Z" level=info msg="RemovePodSandbox \"414fe5482e172e944effad1803daddb61c718e93993ae30c6068055a2cb36c4e\" returns successfully" May 9 00:19:28.948064 containerd[1517]: time="2025-05-09T00:19:28.948023864Z" level=info msg="StopPodSandbox for \"b825fe20b8ea591c55f0874c74dec746451a4c194dfc7159cc7653a9191f8248\"" May 9 00:19:28.948110 containerd[1517]: time="2025-05-09T00:19:28.948096581Z" level=info msg="TearDown network for sandbox \"b825fe20b8ea591c55f0874c74dec746451a4c194dfc7159cc7653a9191f8248\" successfully" May 9 00:19:28.948110 containerd[1517]: time="2025-05-09T00:19:28.948110568Z" level=info msg="StopPodSandbox for \"b825fe20b8ea591c55f0874c74dec746451a4c194dfc7159cc7653a9191f8248\" returns successfully" May 9 00:19:28.948428 containerd[1517]: time="2025-05-09T00:19:28.948387281Z" level=info msg="RemovePodSandbox for \"b825fe20b8ea591c55f0874c74dec746451a4c194dfc7159cc7653a9191f8248\"" May 9 00:19:28.948492 containerd[1517]: time="2025-05-09T00:19:28.948429101Z" level=info msg="Forcibly stopping sandbox \"b825fe20b8ea591c55f0874c74dec746451a4c194dfc7159cc7653a9191f8248\"" May 9 00:19:28.948518 containerd[1517]: time="2025-05-09T00:19:28.948464146Z" level=info msg="TearDown network for sandbox \"b825fe20b8ea591c55f0874c74dec746451a4c194dfc7159cc7653a9191f8248\" successfully" May 9 00:19:28.955637 containerd[1517]: time="2025-05-09T00:19:28.955604076Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b825fe20b8ea591c55f0874c74dec746451a4c194dfc7159cc7653a9191f8248\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 9 00:19:28.955718 containerd[1517]: time="2025-05-09T00:19:28.955649431Z" level=info msg="RemovePodSandbox \"b825fe20b8ea591c55f0874c74dec746451a4c194dfc7159cc7653a9191f8248\" returns successfully"
May 9 00:19:35.776273 systemd[1]: cri-containerd-c4dfcd62c52f6a93facc1d56daa069aec46f24ea66b921de0e53ed91056e978a.scope: Deactivated successfully.
May 9 00:19:35.776844 systemd[1]: cri-containerd-c4dfcd62c52f6a93facc1d56daa069aec46f24ea66b921de0e53ed91056e978a.scope: Consumed 1.338s CPU time, 16.8M memory peak, 0B memory swap peak.
May 9 00:19:35.785194 kubelet[2858]: E0509 00:19:35.785087 2858 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:34218->10.0.0.2:2379: read: connection timed out"
May 9 00:19:35.798933 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c4dfcd62c52f6a93facc1d56daa069aec46f24ea66b921de0e53ed91056e978a-rootfs.mount: Deactivated successfully.
May 9 00:19:35.807918 containerd[1517]: time="2025-05-09T00:19:35.807864004Z" level=info msg="shim disconnected" id=c4dfcd62c52f6a93facc1d56daa069aec46f24ea66b921de0e53ed91056e978a namespace=k8s.io
May 9 00:19:35.807918 containerd[1517]: time="2025-05-09T00:19:35.807912837Z" level=warning msg="cleaning up after shim disconnected" id=c4dfcd62c52f6a93facc1d56daa069aec46f24ea66b921de0e53ed91056e978a namespace=k8s.io
May 9 00:19:35.808224 containerd[1517]: time="2025-05-09T00:19:35.807921232Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 9 00:19:35.830036 kubelet[2858]: I0509 00:19:35.829994 2858 scope.go:117] "RemoveContainer" containerID="c4dfcd62c52f6a93facc1d56daa069aec46f24ea66b921de0e53ed91056e978a"
May 9 00:19:35.832973 containerd[1517]: time="2025-05-09T00:19:35.832935709Z" level=info msg="CreateContainer within sandbox \"2e5cf8450fc64e43c532772d892a13af203af10cee9ac8154b1662e6fb6d5f42\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
May 9 00:19:35.841610 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount638786763.mount: Deactivated successfully.
May 9 00:19:35.846404 containerd[1517]: time="2025-05-09T00:19:35.846242196Z" level=info msg="CreateContainer within sandbox \"2e5cf8450fc64e43c532772d892a13af203af10cee9ac8154b1662e6fb6d5f42\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"a242f426053b22df6558a937ea403a3ab45fa1b87e7c1fadd6eb3351948f0aac\""
May 9 00:19:35.846815 containerd[1517]: time="2025-05-09T00:19:35.846633065Z" level=info msg="StartContainer for \"a242f426053b22df6558a937ea403a3ab45fa1b87e7c1fadd6eb3351948f0aac\""
May 9 00:19:35.864420 systemd[1]: Started cri-containerd-a242f426053b22df6558a937ea403a3ab45fa1b87e7c1fadd6eb3351948f0aac.scope - libcontainer container a242f426053b22df6558a937ea403a3ab45fa1b87e7c1fadd6eb3351948f0aac.
May 9 00:19:35.902942 containerd[1517]: time="2025-05-09T00:19:35.902857852Z" level=info msg="StartContainer for \"a242f426053b22df6558a937ea403a3ab45fa1b87e7c1fadd6eb3351948f0aac\" returns successfully"
May 9 00:19:36.424130 systemd[1]: cri-containerd-ec79b46b09ed86a8d2a4f2a7345a08bffa4df7132b729e4debee05758b0e7f04.scope: Deactivated successfully.
May 9 00:19:36.424357 systemd[1]: cri-containerd-ec79b46b09ed86a8d2a4f2a7345a08bffa4df7132b729e4debee05758b0e7f04.scope: Consumed 5.209s CPU time, 22.9M memory peak, 0B memory swap peak.
May 9 00:19:36.448838 containerd[1517]: time="2025-05-09T00:19:36.448732110Z" level=info msg="shim disconnected" id=ec79b46b09ed86a8d2a4f2a7345a08bffa4df7132b729e4debee05758b0e7f04 namespace=k8s.io
May 9 00:19:36.448838 containerd[1517]: time="2025-05-09T00:19:36.448834784Z" level=warning msg="cleaning up after shim disconnected" id=ec79b46b09ed86a8d2a4f2a7345a08bffa4df7132b729e4debee05758b0e7f04 namespace=k8s.io
May 9 00:19:36.448838 containerd[1517]: time="2025-05-09T00:19:36.448843611Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 9 00:19:36.799015 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ec79b46b09ed86a8d2a4f2a7345a08bffa4df7132b729e4debee05758b0e7f04-rootfs.mount: Deactivated successfully.
May 9 00:19:36.835139 kubelet[2858]: I0509 00:19:36.835116 2858 scope.go:117] "RemoveContainer" containerID="ec79b46b09ed86a8d2a4f2a7345a08bffa4df7132b729e4debee05758b0e7f04"
May 9 00:19:36.837203 containerd[1517]: time="2025-05-09T00:19:36.837160238Z" level=info msg="CreateContainer within sandbox \"a276fbaa00ee70592d622a5a8898d407a279fdb59676e262d6703bcba9002cdc\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
May 9 00:19:36.853754 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3447340747.mount: Deactivated successfully.
May 9 00:19:36.861788 containerd[1517]: time="2025-05-09T00:19:36.861746976Z" level=info msg="CreateContainer within sandbox \"a276fbaa00ee70592d622a5a8898d407a279fdb59676e262d6703bcba9002cdc\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"0c2535030dcdde2f7ffc62aa516dad0aa331d9a427497632dabfe674498553f1\""
May 9 00:19:36.862242 containerd[1517]: time="2025-05-09T00:19:36.862166948Z" level=info msg="StartContainer for \"0c2535030dcdde2f7ffc62aa516dad0aa331d9a427497632dabfe674498553f1\""
May 9 00:19:36.892488 systemd[1]: Started cri-containerd-0c2535030dcdde2f7ffc62aa516dad0aa331d9a427497632dabfe674498553f1.scope - libcontainer container 0c2535030dcdde2f7ffc62aa516dad0aa331d9a427497632dabfe674498553f1.
May 9 00:19:36.932241 containerd[1517]: time="2025-05-09T00:19:36.932188971Z" level=info msg="StartContainer for \"0c2535030dcdde2f7ffc62aa516dad0aa331d9a427497632dabfe674498553f1\" returns successfully"
May 9 00:19:37.798411 systemd[1]: run-containerd-runc-k8s.io-0c2535030dcdde2f7ffc62aa516dad0aa331d9a427497632dabfe674498553f1-runc.OV0Eq0.mount: Deactivated successfully.
May 9 00:19:38.135867 kubelet[2858]: E0509 00:19:38.135708 2858 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:34012->10.0.0.2:2379: read: connection timed out" event="&Event{ObjectMeta:{kube-apiserver-ci-4152-2-3-n-8b48d2c086.183db3d746f11c26 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-4152-2-3-n-8b48d2c086,UID:c0b8eec11126708f73a20eabb114ce30,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Liveness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-4152-2-3-n-8b48d2c086,},FirstTimestamp:2025-05-09 00:19:27.679167526 +0000 UTC m=+358.823393400,LastTimestamp:2025-05-09 00:19:27.679167526 +0000 UTC m=+358.823393400,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4152-2-3-n-8b48d2c086,}"