Apr 30 00:25:14.826798 kernel: Linux version 6.6.88-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Apr 29 22:31:30 -00 2025
Apr 30 00:25:14.826818 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=079594ab73b0b9c3f57b251ae4a9c4ba48b1d8cf52fcc550cc89261eb22129fc
Apr 30 00:25:14.826825 kernel: BIOS-provided physical RAM map:
Apr 30 00:25:14.826831 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Apr 30 00:25:14.826835 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Apr 30 00:25:14.826840 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Apr 30 00:25:14.826845 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007cfdbfff] usable
Apr 30 00:25:14.826850 kernel: BIOS-e820: [mem 0x000000007cfdc000-0x000000007cffffff] reserved
Apr 30 00:25:14.826855 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Apr 30 00:25:14.826860 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Apr 30 00:25:14.826865 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Apr 30 00:25:14.826869 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Apr 30 00:25:14.826874 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Apr 30 00:25:14.826878 kernel: NX (Execute Disable) protection: active
Apr 30 00:25:14.826885 kernel: APIC: Static calls initialized
Apr 30 00:25:14.826890 kernel: SMBIOS 3.0.0 present.
Apr 30 00:25:14.826896 kernel: DMI: Hetzner vServer/Standard PC (Q35 + ICH9, 2009), BIOS 20171111 11/11/2017
Apr 30 00:25:14.826901 kernel: Hypervisor detected: KVM
Apr 30 00:25:14.826905 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Apr 30 00:25:14.826910 kernel: kvm-clock: using sched offset of 2957338733 cycles
Apr 30 00:25:14.826916 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Apr 30 00:25:14.826921 kernel: tsc: Detected 2445.406 MHz processor
Apr 30 00:25:14.826926 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Apr 30 00:25:14.826933 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Apr 30 00:25:14.826938 kernel: last_pfn = 0x7cfdc max_arch_pfn = 0x400000000
Apr 30 00:25:14.826943 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Apr 30 00:25:14.826948 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Apr 30 00:25:14.826953 kernel: Using GB pages for direct mapping
Apr 30 00:25:14.826958 kernel: ACPI: Early table checksum verification disabled
Apr 30 00:25:14.826963 kernel: ACPI: RSDP 0x00000000000F5270 000014 (v00 BOCHS )
Apr 30 00:25:14.826968 kernel: ACPI: RSDT 0x000000007CFE265D 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 00:25:14.826973 kernel: ACPI: FACP 0x000000007CFE244D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 00:25:14.826979 kernel: ACPI: DSDT 0x000000007CFE0040 00240D (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 00:25:14.826985 kernel: ACPI: FACS 0x000000007CFE0000 000040
Apr 30 00:25:14.826990 kernel: ACPI: APIC 0x000000007CFE2541 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 00:25:14.826995 kernel: ACPI: HPET 0x000000007CFE25C1 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 00:25:14.827000 kernel: ACPI: MCFG 0x000000007CFE25F9 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 00:25:14.827005 kernel: ACPI: WAET 0x000000007CFE2635 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 00:25:14.827038 kernel: ACPI: Reserving FACP table memory at [mem 0x7cfe244d-0x7cfe2540]
Apr 30 00:25:14.827043 kernel: ACPI: Reserving DSDT table memory at [mem 0x7cfe0040-0x7cfe244c]
Apr 30 00:25:14.827052 kernel: ACPI: Reserving FACS table memory at [mem 0x7cfe0000-0x7cfe003f]
Apr 30 00:25:14.827058 kernel: ACPI: Reserving APIC table memory at [mem 0x7cfe2541-0x7cfe25c0]
Apr 30 00:25:14.827063 kernel: ACPI: Reserving HPET table memory at [mem 0x7cfe25c1-0x7cfe25f8]
Apr 30 00:25:14.827068 kernel: ACPI: Reserving MCFG table memory at [mem 0x7cfe25f9-0x7cfe2634]
Apr 30 00:25:14.827074 kernel: ACPI: Reserving WAET table memory at [mem 0x7cfe2635-0x7cfe265c]
Apr 30 00:25:14.827079 kernel: No NUMA configuration found
Apr 30 00:25:14.827085 kernel: Faking a node at [mem 0x0000000000000000-0x000000007cfdbfff]
Apr 30 00:25:14.827091 kernel: NODE_DATA(0) allocated [mem 0x7cfd6000-0x7cfdbfff]
Apr 30 00:25:14.827096 kernel: Zone ranges:
Apr 30 00:25:14.827102 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Apr 30 00:25:14.827107 kernel: DMA32 [mem 0x0000000001000000-0x000000007cfdbfff]
Apr 30 00:25:14.827112 kernel: Normal empty
Apr 30 00:25:14.827118 kernel: Movable zone start for each node
Apr 30 00:25:14.827123 kernel: Early memory node ranges
Apr 30 00:25:14.827128 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Apr 30 00:25:14.827133 kernel: node 0: [mem 0x0000000000100000-0x000000007cfdbfff]
Apr 30 00:25:14.827140 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007cfdbfff]
Apr 30 00:25:14.827145 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 30 00:25:14.827150 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Apr 30 00:25:14.827156 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Apr 30 00:25:14.827161 kernel: ACPI: PM-Timer IO Port: 0x608
Apr 30 00:25:14.827166 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Apr 30 00:25:14.827171 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Apr 30 00:25:14.827177 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Apr 30 00:25:14.827182 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Apr 30 00:25:14.827188 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Apr 30 00:25:14.827194 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Apr 30 00:25:14.827199 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Apr 30 00:25:14.827204 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Apr 30 00:25:14.827209 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Apr 30 00:25:14.827215 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Apr 30 00:25:14.827220 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Apr 30 00:25:14.827225 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Apr 30 00:25:14.827230 kernel: Booting paravirtualized kernel on KVM
Apr 30 00:25:14.827237 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Apr 30 00:25:14.827243 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Apr 30 00:25:14.827248 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u1048576
Apr 30 00:25:14.827253 kernel: pcpu-alloc: s197096 r8192 d32280 u1048576 alloc=1*2097152
Apr 30 00:25:14.827258 kernel: pcpu-alloc: [0] 0 1
Apr 30 00:25:14.827264 kernel: kvm-guest: PV spinlocks disabled, no host support
Apr 30 00:25:14.827270 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=079594ab73b0b9c3f57b251ae4a9c4ba48b1d8cf52fcc550cc89261eb22129fc
Apr 30 00:25:14.827275 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Apr 30 00:25:14.827282 kernel: random: crng init done
Apr 30 00:25:14.827287 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 30 00:25:14.827292 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Apr 30 00:25:14.827298 kernel: Fallback order for Node 0: 0
Apr 30 00:25:14.827303 kernel: Built 1 zonelists, mobility grouping on. Total pages: 503708
Apr 30 00:25:14.827308 kernel: Policy zone: DMA32
Apr 30 00:25:14.827314 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 30 00:25:14.827319 kernel: Memory: 1922052K/2047464K available (12288K kernel code, 2295K rwdata, 22748K rodata, 42992K init, 2200K bss, 125152K reserved, 0K cma-reserved)
Apr 30 00:25:14.827325 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Apr 30 00:25:14.827331 kernel: ftrace: allocating 37946 entries in 149 pages
Apr 30 00:25:14.827337 kernel: ftrace: allocated 149 pages with 4 groups
Apr 30 00:25:14.827342 kernel: Dynamic Preempt: voluntary
Apr 30 00:25:14.827348 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 30 00:25:14.827353 kernel: rcu: RCU event tracing is enabled.
Apr 30 00:25:14.827359 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Apr 30 00:25:14.827365 kernel: Trampoline variant of Tasks RCU enabled.
Apr 30 00:25:14.827370 kernel: Rude variant of Tasks RCU enabled.
Apr 30 00:25:14.827375 kernel: Tracing variant of Tasks RCU enabled.
Apr 30 00:25:14.827381 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 30 00:25:14.827387 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Apr 30 00:25:14.827393 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Apr 30 00:25:14.827398 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 30 00:25:14.827403 kernel: Console: colour VGA+ 80x25
Apr 30 00:25:14.827408 kernel: printk: console [tty0] enabled
Apr 30 00:25:14.827414 kernel: printk: console [ttyS0] enabled
Apr 30 00:25:14.827419 kernel: ACPI: Core revision 20230628
Apr 30 00:25:14.827424 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Apr 30 00:25:14.827429 kernel: APIC: Switch to symmetric I/O mode setup
Apr 30 00:25:14.827436 kernel: x2apic enabled
Apr 30 00:25:14.827441 kernel: APIC: Switched APIC routing to: physical x2apic
Apr 30 00:25:14.827447 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Apr 30 00:25:14.827452 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Apr 30 00:25:14.827457 kernel: Calibrating delay loop (skipped) preset value.. 4890.81 BogoMIPS (lpj=2445406)
Apr 30 00:25:14.827463 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Apr 30 00:25:14.827468 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Apr 30 00:25:14.827473 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Apr 30 00:25:14.827484 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Apr 30 00:25:14.827489 kernel: Spectre V2 : Mitigation: Retpolines
Apr 30 00:25:14.827495 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Apr 30 00:25:14.827502 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Apr 30 00:25:14.827507 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Apr 30 00:25:14.827513 kernel: RETBleed: Mitigation: untrained return thunk
Apr 30 00:25:14.827518 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Apr 30 00:25:14.827524 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Apr 30 00:25:14.827530 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Apr 30 00:25:14.827537 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Apr 30 00:25:14.827542 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Apr 30 00:25:14.827548 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Apr 30 00:25:14.827553 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Apr 30 00:25:14.827559 kernel: Freeing SMP alternatives memory: 32K
Apr 30 00:25:14.827564 kernel: pid_max: default: 32768 minimum: 301
Apr 30 00:25:14.827570 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Apr 30 00:25:14.827576 kernel: landlock: Up and running.
Apr 30 00:25:14.827582 kernel: SELinux: Initializing.
Apr 30 00:25:14.827588 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Apr 30 00:25:14.827594 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Apr 30 00:25:14.827599 kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Apr 30 00:25:14.827605 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 30 00:25:14.827611 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 30 00:25:14.827616 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 30 00:25:14.827622 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Apr 30 00:25:14.827629 kernel: ... version: 0
Apr 30 00:25:14.827635 kernel: ... bit width: 48
Apr 30 00:25:14.827640 kernel: ... generic registers: 6
Apr 30 00:25:14.827646 kernel: ... value mask: 0000ffffffffffff
Apr 30 00:25:14.827652 kernel: ... max period: 00007fffffffffff
Apr 30 00:25:14.827657 kernel: ... fixed-purpose events: 0
Apr 30 00:25:14.827663 kernel: ... event mask: 000000000000003f
Apr 30 00:25:14.827668 kernel: signal: max sigframe size: 1776
Apr 30 00:25:14.827674 kernel: rcu: Hierarchical SRCU implementation.
Apr 30 00:25:14.827679 kernel: rcu: Max phase no-delay instances is 400.
Apr 30 00:25:14.827686 kernel: smp: Bringing up secondary CPUs ...
Apr 30 00:25:14.827692 kernel: smpboot: x86: Booting SMP configuration:
Apr 30 00:25:14.827697 kernel: .... node #0, CPUs: #1
Apr 30 00:25:14.827703 kernel: smp: Brought up 1 node, 2 CPUs
Apr 30 00:25:14.827708 kernel: smpboot: Max logical packages: 1
Apr 30 00:25:14.827714 kernel: smpboot: Total of 2 processors activated (9781.62 BogoMIPS)
Apr 30 00:25:14.827719 kernel: devtmpfs: initialized
Apr 30 00:25:14.827725 kernel: x86/mm: Memory block size: 128MB
Apr 30 00:25:14.827731 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 30 00:25:14.827738 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Apr 30 00:25:14.827743 kernel: pinctrl core: initialized pinctrl subsystem
Apr 30 00:25:14.827749 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 30 00:25:14.827754 kernel: audit: initializing netlink subsys (disabled)
Apr 30 00:25:14.827760 kernel: audit: type=2000 audit(1745972713.917:1): state=initialized audit_enabled=0 res=1
Apr 30 00:25:14.827777 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 30 00:25:14.827783 kernel: thermal_sys: Registered thermal governor 'user_space'
Apr 30 00:25:14.827789 kernel: cpuidle: using governor menu
Apr 30 00:25:14.827795 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 30 00:25:14.827802 kernel: dca service started, version 1.12.1
Apr 30 00:25:14.827808 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Apr 30 00:25:14.827813 kernel: PCI: Using configuration type 1 for base access
Apr 30 00:25:14.827819 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Apr 30 00:25:14.827824 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 30 00:25:14.827830 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Apr 30 00:25:14.827835 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 30 00:25:14.827841 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Apr 30 00:25:14.827847 kernel: ACPI: Added _OSI(Module Device)
Apr 30 00:25:14.827853 kernel: ACPI: Added _OSI(Processor Device)
Apr 30 00:25:14.827859 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Apr 30 00:25:14.827864 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 30 00:25:14.827870 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 30 00:25:14.827875 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Apr 30 00:25:14.827881 kernel: ACPI: Interpreter enabled
Apr 30 00:25:14.827886 kernel: ACPI: PM: (supports S0 S5)
Apr 30 00:25:14.827892 kernel: ACPI: Using IOAPIC for interrupt routing
Apr 30 00:25:14.827898 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Apr 30 00:25:14.827904 kernel: PCI: Using E820 reservations for host bridge windows
Apr 30 00:25:14.827910 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Apr 30 00:25:14.827916 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Apr 30 00:25:14.828300 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Apr 30 00:25:14.828385 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Apr 30 00:25:14.828448 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Apr 30 00:25:14.828457 kernel: PCI host bridge to bus 0000:00
Apr 30 00:25:14.828526 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Apr 30 00:25:14.828582 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Apr 30 00:25:14.828634 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Apr 30 00:25:14.828684 kernel: pci_bus 0000:00: root bus resource [mem 0x7d000000-0xafffffff window]
Apr 30 00:25:14.828736 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Apr 30 00:25:14.828805 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Apr 30 00:25:14.828859 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Apr 30 00:25:14.828936 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Apr 30 00:25:14.829026 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x030000
Apr 30 00:25:14.829096 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfb800000-0xfbffffff pref]
Apr 30 00:25:14.829157 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfd200000-0xfd203fff 64bit pref]
Apr 30 00:25:14.829217 kernel: pci 0000:00:01.0: reg 0x20: [mem 0xfea10000-0xfea10fff]
Apr 30 00:25:14.829277 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea00000-0xfea0ffff pref]
Apr 30 00:25:14.829338 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Apr 30 00:25:14.829410 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Apr 30 00:25:14.829472 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea11000-0xfea11fff]
Apr 30 00:25:14.829537 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Apr 30 00:25:14.829598 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea12000-0xfea12fff]
Apr 30 00:25:14.829663 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Apr 30 00:25:14.829725 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea13000-0xfea13fff]
Apr 30 00:25:14.829809 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Apr 30 00:25:14.829873 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea14000-0xfea14fff]
Apr 30 00:25:14.829992 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Apr 30 00:25:14.830205 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea15000-0xfea15fff]
Apr 30 00:25:14.830343 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Apr 30 00:25:14.830414 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea16000-0xfea16fff]
Apr 30 00:25:14.830488 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Apr 30 00:25:14.830551 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea17000-0xfea17fff]
Apr 30 00:25:14.830622 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Apr 30 00:25:14.830685 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea18000-0xfea18fff]
Apr 30 00:25:14.830752 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400
Apr 30 00:25:14.830835 kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfea19000-0xfea19fff]
Apr 30 00:25:14.830956 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Apr 30 00:25:14.831069 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Apr 30 00:25:14.831144 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Apr 30 00:25:14.831207 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc040-0xc05f]
Apr 30 00:25:14.831266 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea1a000-0xfea1afff]
Apr 30 00:25:14.831333 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Apr 30 00:25:14.831402 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Apr 30 00:25:14.831473 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000
Apr 30 00:25:14.831536 kernel: pci 0000:01:00.0: reg 0x14: [mem 0xfe880000-0xfe880fff]
Apr 30 00:25:14.831599 kernel: pci 0000:01:00.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref]
Apr 30 00:25:14.831660 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfe800000-0xfe87ffff pref]
Apr 30 00:25:14.831720 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Apr 30 00:25:14.831793 kernel: pci 0000:00:02.0: bridge window [mem 0xfe800000-0xfe9fffff]
Apr 30 00:25:14.831858 kernel: pci 0000:00:02.0: bridge window [mem 0xfd000000-0xfd1fffff 64bit pref]
Apr 30 00:25:14.831927 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330
Apr 30 00:25:14.831989 kernel: pci 0000:02:00.0: reg 0x10: [mem 0xfe600000-0xfe603fff 64bit]
Apr 30 00:25:14.832090 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Apr 30 00:25:14.832154 kernel: pci 0000:00:02.1: bridge window [mem 0xfe600000-0xfe7fffff]
Apr 30 00:25:14.832213 kernel: pci 0000:00:02.1: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Apr 30 00:25:14.832282 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000
Apr 30 00:25:14.832350 kernel: pci 0000:03:00.0: reg 0x14: [mem 0xfe400000-0xfe400fff]
Apr 30 00:25:14.832412 kernel: pci 0000:03:00.0: reg 0x20: [mem 0xfcc00000-0xfcc03fff 64bit pref]
Apr 30 00:25:14.832474 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Apr 30 00:25:14.832532 kernel: pci 0000:00:02.2: bridge window [mem 0xfe400000-0xfe5fffff]
Apr 30 00:25:14.832591 kernel: pci 0000:00:02.2: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Apr 30 00:25:14.832658 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00
Apr 30 00:25:14.832720 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref]
Apr 30 00:25:14.832798 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Apr 30 00:25:14.832860 kernel: pci 0000:00:02.3: bridge window [mem 0xfe200000-0xfe3fffff]
Apr 30 00:25:14.832919 kernel: pci 0000:00:02.3: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Apr 30 00:25:14.832993 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00
Apr 30 00:25:14.833519 kernel: pci 0000:05:00.0: reg 0x14: [mem 0xfe000000-0xfe000fff]
Apr 30 00:25:14.833590 kernel: pci 0000:05:00.0: reg 0x20: [mem 0xfc800000-0xfc803fff 64bit pref]
Apr 30 00:25:14.833650 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Apr 30 00:25:14.833713 kernel: pci 0000:00:02.4: bridge window [mem 0xfe000000-0xfe1fffff]
Apr 30 00:25:14.833788 kernel: pci 0000:00:02.4: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Apr 30 00:25:14.833859 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000
Apr 30 00:25:14.833923 kernel: pci 0000:06:00.0: reg 0x14: [mem 0xfde00000-0xfde00fff]
Apr 30 00:25:14.833987 kernel: pci 0000:06:00.0: reg 0x20: [mem 0xfc600000-0xfc603fff 64bit pref]
Apr 30 00:25:14.834065 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Apr 30 00:25:14.834137 kernel: pci 0000:00:02.5: bridge window [mem 0xfde00000-0xfdffffff]
Apr 30 00:25:14.834241 kernel: pci 0000:00:02.5: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Apr 30 00:25:14.834252 kernel: acpiphp: Slot [0] registered
Apr 30 00:25:14.834325 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000
Apr 30 00:25:14.834389 kernel: pci 0000:07:00.0: reg 0x14: [mem 0xfdc80000-0xfdc80fff]
Apr 30 00:25:14.834450 kernel: pci 0000:07:00.0: reg 0x20: [mem 0xfc400000-0xfc403fff 64bit pref]
Apr 30 00:25:14.834510 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfdc00000-0xfdc7ffff pref]
Apr 30 00:25:14.834569 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Apr 30 00:25:14.834628 kernel: pci 0000:00:02.6: bridge window [mem 0xfdc00000-0xfddfffff]
Apr 30 00:25:14.834692 kernel: pci 0000:00:02.6: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Apr 30 00:25:14.834700 kernel: acpiphp: Slot [0-2] registered
Apr 30 00:25:14.834757 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Apr 30 00:25:14.837033 kernel: pci 0000:00:02.7: bridge window [mem 0xfda00000-0xfdbfffff]
Apr 30 00:25:14.837170 kernel: pci 0000:00:02.7: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Apr 30 00:25:14.837186 kernel: acpiphp: Slot [0-3] registered
Apr 30 00:25:14.837303 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Apr 30 00:25:14.837422 kernel: pci 0000:00:03.0: bridge window [mem 0xfd800000-0xfd9fffff]
Apr 30 00:25:14.837548 kernel: pci 0000:00:03.0: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Apr 30 00:25:14.837566 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Apr 30 00:25:14.837576 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Apr 30 00:25:14.837582 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Apr 30 00:25:14.837587 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Apr 30 00:25:14.837593 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Apr 30 00:25:14.837599 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Apr 30 00:25:14.837604 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Apr 30 00:25:14.837610 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Apr 30 00:25:14.837619 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Apr 30 00:25:14.837625 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Apr 30 00:25:14.837630 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Apr 30 00:25:14.837636 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Apr 30 00:25:14.837642 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Apr 30 00:25:14.837648 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Apr 30 00:25:14.837653 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Apr 30 00:25:14.837659 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Apr 30 00:25:14.837665 kernel: iommu: Default domain type: Translated
Apr 30 00:25:14.837674 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Apr 30 00:25:14.837679 kernel: PCI: Using ACPI for IRQ routing
Apr 30 00:25:14.837685 kernel: PCI: pci_cache_line_size set to 64 bytes
Apr 30 00:25:14.837691 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Apr 30 00:25:14.837697 kernel: e820: reserve RAM buffer [mem 0x7cfdc000-0x7fffffff]
Apr 30 00:25:14.837781 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Apr 30 00:25:14.837848 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Apr 30 00:25:14.837924 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Apr 30 00:25:14.837935 kernel: vgaarb: loaded
Apr 30 00:25:14.837944 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Apr 30 00:25:14.837950 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Apr 30 00:25:14.837956 kernel: clocksource: Switched to clocksource kvm-clock
Apr 30 00:25:14.837962 kernel: VFS: Disk quotas dquot_6.6.0
Apr 30 00:25:14.837974 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 30 00:25:14.837985 kernel: pnp: PnP ACPI init
Apr 30 00:25:14.838910 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Apr 30 00:25:14.838925 kernel: pnp: PnP ACPI: found 5 devices
Apr 30 00:25:14.838935 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Apr 30 00:25:14.838941 kernel: NET: Registered PF_INET protocol family
Apr 30 00:25:14.838947 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 30 00:25:14.838953 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Apr 30 00:25:14.838959 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 30 00:25:14.838965 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Apr 30 00:25:14.838971 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Apr 30 00:25:14.838976 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Apr 30 00:25:14.838982 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Apr 30 00:25:14.838989 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Apr 30 00:25:14.838995 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 30 00:25:14.839001 kernel: NET: Registered PF_XDP protocol family
Apr 30 00:25:14.839227 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Apr 30 00:25:14.839312 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Apr 30 00:25:14.839417 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Apr 30 00:25:14.839482 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x1000-0x1fff]
Apr 30 00:25:14.839549 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x2000-0x2fff]
Apr 30 00:25:14.839610 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x3000-0x3fff]
Apr 30 00:25:14.839670 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Apr 30 00:25:14.839728 kernel: pci 0000:00:02.0: bridge window [mem 0xfe800000-0xfe9fffff]
Apr 30 00:25:14.839804 kernel: pci 0000:00:02.0: bridge window [mem 0xfd000000-0xfd1fffff 64bit pref]
Apr 30 00:25:14.839865 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Apr 30 00:25:14.839926 kernel: pci 0000:00:02.1: bridge window [mem 0xfe600000-0xfe7fffff]
Apr 30 00:25:14.842051 kernel: pci 0000:00:02.1: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Apr 30 00:25:14.842131 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Apr 30 00:25:14.842201 kernel: pci 0000:00:02.2: bridge window [mem 0xfe400000-0xfe5fffff]
Apr 30 00:25:14.842261 kernel: pci 0000:00:02.2: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Apr 30 00:25:14.842322 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Apr 30 00:25:14.842383 kernel: pci 0000:00:02.3: bridge window [mem 0xfe200000-0xfe3fffff]
Apr 30 00:25:14.842443 kernel: pci 0000:00:02.3: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Apr 30 00:25:14.842502 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Apr 30 00:25:14.842567 kernel: pci 0000:00:02.4: bridge window [mem 0xfe000000-0xfe1fffff]
Apr 30 00:25:14.842639 kernel: pci 0000:00:02.4: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Apr 30 00:25:14.842702 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Apr 30 00:25:14.842762 kernel: pci 0000:00:02.5: bridge window [mem 0xfde00000-0xfdffffff]
Apr 30 00:25:14.842839 kernel: pci 0000:00:02.5: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Apr 30 00:25:14.842899 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Apr 30 00:25:14.842959 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x1fff]
Apr 30 00:25:14.843066 kernel: pci 0000:00:02.6: bridge window [mem 0xfdc00000-0xfddfffff]
Apr 30 00:25:14.843156 kernel: pci 0000:00:02.6: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Apr 30 00:25:14.843219 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Apr 30 00:25:14.843279 kernel: pci 0000:00:02.7: bridge window [io 0x2000-0x2fff]
Apr 30 00:25:14.843344 kernel: pci 0000:00:02.7: bridge window [mem 0xfda00000-0xfdbfffff]
Apr 30 00:25:14.843404 kernel: pci 0000:00:02.7: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Apr 30 00:25:14.843464 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Apr 30 00:25:14.843526 kernel: pci 0000:00:03.0: bridge window [io 0x3000-0x3fff]
Apr 30 00:25:14.843584 kernel: pci 0000:00:03.0: bridge window [mem 0xfd800000-0xfd9fffff]
Apr 30 00:25:14.843646 kernel: pci 0000:00:03.0: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Apr 30 00:25:14.843706 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Apr 30 00:25:14.843760 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Apr 30 00:25:14.843829 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Apr 30 00:25:14.843882 kernel: pci_bus 0000:00: resource 7 [mem 0x7d000000-0xafffffff window]
Apr 30 00:25:14.843939 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Apr 30 00:25:14.843991 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Apr 30 00:25:14.845108 kernel: pci_bus 0000:01: resource 1 [mem 0xfe800000-0xfe9fffff]
Apr 30 00:25:14.845172 kernel: pci_bus 0000:01: resource 2 [mem 0xfd000000-0xfd1fffff 64bit pref]
Apr 30 00:25:14.845233 kernel: pci_bus 0000:02: resource 1 [mem 0xfe600000-0xfe7fffff]
Apr 30 00:25:14.845289 kernel: pci_bus 0000:02: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref]
Apr 30 00:25:14.845355 kernel: pci_bus 0000:03: resource 1 [mem 0xfe400000-0xfe5fffff]
Apr 30 00:25:14.845410 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref]
Apr 30 00:25:14.845475 kernel: pci_bus 0000:04: resource 1 [mem 0xfe200000-0xfe3fffff]
Apr 30 00:25:14.845530 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref]
Apr 30 00:25:14.845590 kernel: pci_bus 0000:05: resource 1 [mem 0xfe000000-0xfe1fffff]
Apr 30 00:25:14.845646 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref]
Apr 30 00:25:14.845712 kernel: pci_bus 0000:06: resource 1 [mem 0xfde00000-0xfdffffff]
Apr 30 00:25:14.845783 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref]
Apr 30 00:25:14.845847 kernel: pci_bus 0000:07: resource 0 [io 0x1000-0x1fff]
Apr 30 00:25:14.845902 kernel: pci_bus 0000:07: resource 1 [mem 0xfdc00000-0xfddfffff]
Apr 30 00:25:14.845957 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref]
Apr 30 00:25:14.847106 kernel: pci_bus 0000:08: resource 0 [io 0x2000-0x2fff]
Apr 30 00:25:14.847182 kernel: pci_bus 0000:08: resource 1 [mem 0xfda00000-0xfdbfffff]
Apr 30 00:25:14.847247 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref]
Apr 30 00:25:14.847310 kernel: pci_bus 0000:09: resource 0 [io 0x3000-0x3fff]
Apr 30 00:25:14.847367 kernel: pci_bus 0000:09: resource 1 [mem 0xfd800000-0xfd9fffff]
Apr 30 00:25:14.847423 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref]
Apr 30 00:25:14.847432 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Apr 30 00:25:14.847438 kernel: PCI: CLS 0 bytes, default 64
Apr 30 00:25:14.847444 kernel: Initialise system trusted keyrings
Apr 30 00:25:14.847453 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Apr 30 00:25:14.847460 kernel: Key type asymmetric registered
Apr 30 00:25:14.847466 kernel: Asymmetric key parser 'x509' registered
Apr 30 00:25:14.847472 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Apr 30 00:25:14.847478 kernel: io scheduler mq-deadline registered
Apr 30 00:25:14.847484 kernel: io scheduler kyber registered
Apr 30 00:25:14.847490 kernel: io scheduler bfq registered
Apr 30 00:25:14.847553 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24
Apr 30 00:25:14.847615 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24
Apr 30 00:25:14.847681 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25
Apr 30 00:25:14.847743 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25
Apr 30 00:25:14.847823 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26
Apr 30 00:25:14.847886 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26
Apr 30 00:25:14.847948 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27
Apr 30 00:25:14.849035 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27
Apr 30 00:25:14.849113 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28
Apr 30 00:25:14.849175 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28
Apr 30 00:25:14.849234 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29
Apr 30 00:25:14.849298 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29
Apr 30 00:25:14.849357 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30
Apr 30 00:25:14.849416 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30
Apr 30 00:25:14.849475 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31
Apr 30 00:25:14.849533 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31
Apr 30 00:25:14.849542 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Apr 30 00:25:14.849599 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 32
Apr 30 00:25:14.849658 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 32
Apr 30 00:25:14.849669 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Apr 30 00:25:14.849676 kernel: ACPI: \_SB_.GSIF: Enabled at IRQ 21
Apr 30 00:25:14.849684 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 30 00:25:14.849690 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Apr 30 00:25:14.849696 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Apr 30 00:25:14.849702 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Apr 30 00:25:14.849708 kernel: serio: i8042 AUX
port at 0x60,0x64 irq 12 Apr 30 00:25:14.849785 kernel: rtc_cmos 00:03: RTC can wake from S4 Apr 30 00:25:14.849800 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Apr 30 00:25:14.849857 kernel: rtc_cmos 00:03: registered as rtc0 Apr 30 00:25:14.849912 kernel: rtc_cmos 00:03: setting system clock to 2025-04-30T00:25:14 UTC (1745972714) Apr 30 00:25:14.849967 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Apr 30 00:25:14.849976 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Apr 30 00:25:14.849982 kernel: NET: Registered PF_INET6 protocol family Apr 30 00:25:14.849989 kernel: Segment Routing with IPv6 Apr 30 00:25:14.849995 kernel: In-situ OAM (IOAM) with IPv6 Apr 30 00:25:14.850001 kernel: NET: Registered PF_PACKET protocol family Apr 30 00:25:14.851032 kernel: Key type dns_resolver registered Apr 30 00:25:14.851041 kernel: IPI shorthand broadcast: enabled Apr 30 00:25:14.851048 kernel: sched_clock: Marking stable (1022088779, 147997550)->(1176416169, -6329840) Apr 30 00:25:14.851054 kernel: registered taskstats version 1 Apr 30 00:25:14.851060 kernel: Loading compiled-in X.509 certificates Apr 30 00:25:14.851066 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.88-flatcar: eb8928891d93dabd1aa89590482110d196038597' Apr 30 00:25:14.851072 kernel: Key type .fscrypt registered Apr 30 00:25:14.851078 kernel: Key type fscrypt-provisioning registered Apr 30 00:25:14.851084 kernel: ima: No TPM chip found, activating TPM-bypass! 
Apr 30 00:25:14.851092 kernel: ima: Allocated hash algorithm: sha1
Apr 30 00:25:14.851098 kernel: ima: No architecture policies found
Apr 30 00:25:14.851104 kernel: clk: Disabling unused clocks
Apr 30 00:25:14.851110 kernel: Freeing unused kernel image (initmem) memory: 42992K
Apr 30 00:25:14.851116 kernel: Write protecting the kernel read-only data: 36864k
Apr 30 00:25:14.851122 kernel: Freeing unused kernel image (rodata/data gap) memory: 1828K
Apr 30 00:25:14.851128 kernel: Run /init as init process
Apr 30 00:25:14.851133 kernel: with arguments:
Apr 30 00:25:14.851141 kernel: /init
Apr 30 00:25:14.851147 kernel: with environment:
Apr 30 00:25:14.851153 kernel: HOME=/
Apr 30 00:25:14.851159 kernel: TERM=linux
Apr 30 00:25:14.851165 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Apr 30 00:25:14.851173 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 30 00:25:14.851182 systemd[1]: Detected virtualization kvm.
Apr 30 00:25:14.851188 systemd[1]: Detected architecture x86-64.
Apr 30 00:25:14.851196 systemd[1]: Running in initrd.
Apr 30 00:25:14.851203 systemd[1]: No hostname configured, using default hostname.
Apr 30 00:25:14.851209 systemd[1]: Hostname set to .
Apr 30 00:25:14.851216 systemd[1]: Initializing machine ID from VM UUID.
Apr 30 00:25:14.851222 systemd[1]: Queued start job for default target initrd.target.
Apr 30 00:25:14.851228 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 30 00:25:14.851235 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 30 00:25:14.851242 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Apr 30 00:25:14.851250 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 30 00:25:14.851257 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Apr 30 00:25:14.851263 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Apr 30 00:25:14.851271 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Apr 30 00:25:14.851278 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Apr 30 00:25:14.851284 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 30 00:25:14.851290 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 30 00:25:14.851298 systemd[1]: Reached target paths.target - Path Units.
Apr 30 00:25:14.851305 systemd[1]: Reached target slices.target - Slice Units.
Apr 30 00:25:14.851311 systemd[1]: Reached target swap.target - Swaps.
Apr 30 00:25:14.851317 systemd[1]: Reached target timers.target - Timer Units.
Apr 30 00:25:14.851324 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Apr 30 00:25:14.851330 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 30 00:25:14.851337 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 30 00:25:14.851343 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Apr 30 00:25:14.851350 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 30 00:25:14.851358 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 30 00:25:14.851364 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 30 00:25:14.851370 systemd[1]: Reached target sockets.target - Socket Units.
Apr 30 00:25:14.851377 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Apr 30 00:25:14.851384 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 30 00:25:14.851390 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Apr 30 00:25:14.851396 systemd[1]: Starting systemd-fsck-usr.service...
Apr 30 00:25:14.851403 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 30 00:25:14.851410 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 30 00:25:14.851417 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 30 00:25:14.851441 systemd-journald[188]: Collecting audit messages is disabled.
Apr 30 00:25:14.851459 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Apr 30 00:25:14.851468 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 30 00:25:14.851474 systemd[1]: Finished systemd-fsck-usr.service.
Apr 30 00:25:14.851481 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 30 00:25:14.851488 systemd-journald[188]: Journal started
Apr 30 00:25:14.851506 systemd-journald[188]: Runtime Journal (/run/log/journal/46fb5e6cfedd4454a74a589e098dd8b4) is 4.8M, max 38.4M, 33.6M free.
Apr 30 00:25:14.839402 systemd-modules-load[189]: Inserted module 'overlay'
Apr 30 00:25:14.891047 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Apr 30 00:25:14.891067 kernel: Bridge firewalling registered
Apr 30 00:25:14.891076 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 30 00:25:14.863462 systemd-modules-load[189]: Inserted module 'br_netfilter'
Apr 30 00:25:14.892264 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 30 00:25:14.892862 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 00:25:14.899159 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 30 00:25:14.901122 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 30 00:25:14.904133 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 30 00:25:14.904930 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 30 00:25:14.909172 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 30 00:25:14.913036 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 30 00:25:14.919211 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 30 00:25:14.920496 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 30 00:25:14.921185 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 30 00:25:14.926121 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Apr 30 00:25:14.928887 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 30 00:25:14.934345 dracut-cmdline[221]: dracut-dracut-053
Apr 30 00:25:14.936445 dracut-cmdline[221]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=079594ab73b0b9c3f57b251ae4a9c4ba48b1d8cf52fcc550cc89261eb22129fc
Apr 30 00:25:14.949513 systemd-resolved[223]: Positive Trust Anchors:
Apr 30 00:25:14.949524 systemd-resolved[223]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 30 00:25:14.949549 systemd-resolved[223]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 30 00:25:14.952522 systemd-resolved[223]: Defaulting to hostname 'linux'.
Apr 30 00:25:14.958819 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 30 00:25:14.959517 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 30 00:25:14.983030 kernel: SCSI subsystem initialized
Apr 30 00:25:14.992037 kernel: Loading iSCSI transport class v2.0-870.
Apr 30 00:25:15.000041 kernel: iscsi: registered transport (tcp)
Apr 30 00:25:15.016480 kernel: iscsi: registered transport (qla4xxx)
Apr 30 00:25:15.016515 kernel: QLogic iSCSI HBA Driver
Apr 30 00:25:15.038533 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Apr 30 00:25:15.044133 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Apr 30 00:25:15.062128 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Apr 30 00:25:15.062175 kernel: device-mapper: uevent: version 1.0.3
Apr 30 00:25:15.062185 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Apr 30 00:25:15.097038 kernel: raid6: avx2x4 gen() 34690 MB/s
Apr 30 00:25:15.114037 kernel: raid6: avx2x2 gen() 31291 MB/s
Apr 30 00:25:15.131134 kernel: raid6: avx2x1 gen() 26052 MB/s
Apr 30 00:25:15.131181 kernel: raid6: using algorithm avx2x4 gen() 34690 MB/s
Apr 30 00:25:15.149216 kernel: raid6: .... xor() 4776 MB/s, rmw enabled
Apr 30 00:25:15.149245 kernel: raid6: using avx2x2 recovery algorithm
Apr 30 00:25:15.166037 kernel: xor: automatically using best checksumming function avx
Apr 30 00:25:15.276041 kernel: Btrfs loaded, zoned=no, fsverity=no
Apr 30 00:25:15.283517 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Apr 30 00:25:15.290111 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 30 00:25:15.300944 systemd-udevd[406]: Using default interface naming scheme 'v255'.
Apr 30 00:25:15.304495 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 30 00:25:15.312119 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Apr 30 00:25:15.320819 dracut-pre-trigger[412]: rd.md=0: removing MD RAID activation
Apr 30 00:25:15.339565 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 30 00:25:15.345111 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 30 00:25:15.378686 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 30 00:25:15.389233 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Apr 30 00:25:15.398180 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Apr 30 00:25:15.399755 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 30 00:25:15.401463 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 30 00:25:15.402492 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 30 00:25:15.410107 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Apr 30 00:25:15.417477 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Apr 30 00:25:15.445419 kernel: scsi host0: Virtio SCSI HBA
Apr 30 00:25:15.460200 kernel: libata version 3.00 loaded.
Apr 30 00:25:15.460248 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
Apr 30 00:25:15.462029 kernel: cryptd: max_cpu_qlen set to 1000
Apr 30 00:25:15.469266 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 30 00:25:15.469364 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 30 00:25:15.495034 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 30 00:25:15.495504 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 30 00:25:15.497397 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 00:25:15.499343 kernel: ACPI: bus type USB registered
Apr 30 00:25:15.499373 kernel: usbcore: registered new interface driver usbfs
Apr 30 00:25:15.499388 kernel: usbcore: registered new interface driver hub
Apr 30 00:25:15.498854 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Apr 30 00:25:15.504351 kernel: usbcore: registered new device driver usb
Apr 30 00:25:15.509965 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 30 00:25:15.540340 kernel: AVX2 version of gcm_enc/dec engaged.
Apr 30 00:25:15.540386 kernel: AES CTR mode by8 optimization enabled
Apr 30 00:25:15.555663 kernel: ahci 0000:00:1f.2: version 3.0
Apr 30 00:25:15.569446 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Apr 30 00:25:15.569464 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Apr 30 00:25:15.569561 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Apr 30 00:25:15.569640 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Apr 30 00:25:15.569719 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1
Apr 30 00:25:15.569807 kernel: sd 0:0:0:0: Power-on or device reset occurred
Apr 30 00:25:15.569897 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010
Apr 30 00:25:15.569987 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Apr 30 00:25:15.570397 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2
Apr 30 00:25:15.570580 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed
Apr 30 00:25:15.570699 kernel: sd 0:0:0:0: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB)
Apr 30 00:25:15.571442 kernel: sd 0:0:0:0: [sda] Write Protect is off
Apr 30 00:25:15.571536 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08
Apr 30 00:25:15.571638 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Apr 30 00:25:15.571738 kernel: hub 1-0:1.0: USB hub found
Apr 30 00:25:15.571846 kernel: hub 1-0:1.0: 4 ports detected
Apr 30 00:25:15.571928 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM.
Apr 30 00:25:15.572066 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Apr 30 00:25:15.572075 kernel: GPT:17805311 != 80003071
Apr 30 00:25:15.572083 kernel: GPT:Alternate GPT header not at the end of the disk.
Apr 30 00:25:15.572090 kernel: GPT:17805311 != 80003071
Apr 30 00:25:15.572100 kernel: GPT: Use GNU Parted to correct GPT errors.
Apr 30 00:25:15.572106 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 30 00:25:15.572113 kernel: hub 2-0:1.0: USB hub found
Apr 30 00:25:15.572215 kernel: hub 2-0:1.0: 4 ports detected
Apr 30 00:25:15.572297 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Apr 30 00:25:15.572823 kernel: scsi host1: ahci
Apr 30 00:25:15.572904 kernel: scsi host2: ahci
Apr 30 00:25:15.572980 kernel: scsi host3: ahci
Apr 30 00:25:15.573140 kernel: scsi host4: ahci
Apr 30 00:25:15.573211 kernel: scsi host5: ahci
Apr 30 00:25:15.573283 kernel: scsi host6: ahci
Apr 30 00:25:15.573353 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a100 irq 48
Apr 30 00:25:15.573362 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a180 irq 48
Apr 30 00:25:15.573369 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a200 irq 48
Apr 30 00:25:15.573379 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a280 irq 48
Apr 30 00:25:15.573386 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a300 irq 48
Apr 30 00:25:15.573393 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a380 irq 48
Apr 30 00:25:15.604923 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM.
Apr 30 00:25:15.641079 kernel: BTRFS: device fsid 4a916ed5-00fd-4e52-b8e2-9fed6d007e9f devid 1 transid 40 /dev/sda3 scanned by (udev-worker) (457)
Apr 30 00:25:15.643556 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by (udev-worker) (463)
Apr 30 00:25:15.644203 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 00:25:15.657432 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT.
Apr 30 00:25:15.661314 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A.
Apr 30 00:25:15.662204 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A.
Apr 30 00:25:15.668047 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Apr 30 00:25:15.678196 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Apr 30 00:25:15.681739 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 30 00:25:15.689035 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 30 00:25:15.690356 disk-uuid[552]: Primary Header is updated.
Apr 30 00:25:15.690356 disk-uuid[552]: Secondary Entries is updated.
Apr 30 00:25:15.690356 disk-uuid[552]: Secondary Header is updated.
Apr 30 00:25:15.704699 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 30 00:25:15.797051 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd
Apr 30 00:25:15.884029 kernel: ata3: SATA link down (SStatus 0 SControl 300)
Apr 30 00:25:15.884096 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Apr 30 00:25:15.884109 kernel: ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Apr 30 00:25:15.884119 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Apr 30 00:25:15.885057 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Apr 30 00:25:15.888551 kernel: ata1.00: applying bridge limits
Apr 30 00:25:15.888600 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Apr 30 00:25:15.888613 kernel: ata1.00: configured for UDMA/100
Apr 30 00:25:15.891169 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Apr 30 00:25:15.892073 kernel: scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Apr 30 00:25:15.933049 kernel: hid: raw HID events driver (C) Jiri Kosina
Apr 30 00:25:15.936176 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Apr 30 00:25:15.945710 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Apr 30 00:25:15.945726 kernel: usbcore: registered new interface driver usbhid
Apr 30 00:25:15.945736 kernel: usbhid: USB HID core driver
Apr 30 00:25:15.945754 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input3
Apr 30 00:25:15.945764 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0
Apr 30 00:25:15.945924 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0
Apr 30 00:25:16.702101 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 30 00:25:16.703067 disk-uuid[554]: The operation has completed successfully.
Apr 30 00:25:16.752166 systemd[1]: disk-uuid.service: Deactivated successfully.
Apr 30 00:25:16.752261 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Apr 30 00:25:16.761140 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Apr 30 00:25:16.764144 sh[592]: Success
Apr 30 00:25:16.775032 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Apr 30 00:25:16.815901 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Apr 30 00:25:16.823832 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Apr 30 00:25:16.825001 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Apr 30 00:25:16.837166 kernel: BTRFS info (device dm-0): first mount of filesystem 4a916ed5-00fd-4e52-b8e2-9fed6d007e9f
Apr 30 00:25:16.837200 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Apr 30 00:25:16.839870 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Apr 30 00:25:16.839889 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Apr 30 00:25:16.842599 kernel: BTRFS info (device dm-0): using free space tree
Apr 30 00:25:16.851049 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Apr 30 00:25:16.852984 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Apr 30 00:25:16.854083 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Apr 30 00:25:16.858230 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Apr 30 00:25:16.861148 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Apr 30 00:25:16.874269 kernel: BTRFS info (device sda6): first mount of filesystem e6cdb381-7cd1-4e2a-87c4-f7bcb12f058c
Apr 30 00:25:16.874301 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Apr 30 00:25:16.876367 kernel: BTRFS info (device sda6): using free space tree
Apr 30 00:25:16.882286 kernel: BTRFS info (device sda6): enabling ssd optimizations
Apr 30 00:25:16.882313 kernel: BTRFS info (device sda6): auto enabling async discard
Apr 30 00:25:16.891244 systemd[1]: mnt-oem.mount: Deactivated successfully.
Apr 30 00:25:16.893439 kernel: BTRFS info (device sda6): last unmount of filesystem e6cdb381-7cd1-4e2a-87c4-f7bcb12f058c
Apr 30 00:25:16.898321 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Apr 30 00:25:16.905790 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Apr 30 00:25:16.938166 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 30 00:25:16.946868 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 30 00:25:16.973685 ignition[723]: Ignition 2.20.0
Apr 30 00:25:16.974461 ignition[723]: Stage: fetch-offline
Apr 30 00:25:16.974506 ignition[723]: no configs at "/usr/lib/ignition/base.d"
Apr 30 00:25:16.974516 ignition[723]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 30 00:25:16.978694 systemd-networkd[774]: lo: Link UP
Apr 30 00:25:16.974604 ignition[723]: parsed url from cmdline: ""
Apr 30 00:25:16.978699 systemd-networkd[774]: lo: Gained carrier
Apr 30 00:25:16.974607 ignition[723]: no config URL provided
Apr 30 00:25:16.978855 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 30 00:25:16.974611 ignition[723]: reading system config file "/usr/lib/ignition/user.ign"
Apr 30 00:25:16.980913 systemd-networkd[774]: Enumeration completed
Apr 30 00:25:16.974617 ignition[723]: no config at "/usr/lib/ignition/user.ign"
Apr 30 00:25:16.981130 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 30 00:25:16.974621 ignition[723]: failed to fetch config: resource requires networking
Apr 30 00:25:16.981485 systemd-networkd[774]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 30 00:25:16.975273 ignition[723]: Ignition finished successfully
Apr 30 00:25:16.981489 systemd-networkd[774]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 30 00:25:16.982702 systemd-networkd[774]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 30 00:25:16.982705 systemd-networkd[774]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 30 00:25:16.983315 systemd-networkd[774]: eth0: Link UP
Apr 30 00:25:16.983318 systemd-networkd[774]: eth0: Gained carrier
Apr 30 00:25:16.983324 systemd-networkd[774]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 30 00:25:16.984182 systemd[1]: Reached target network.target - Network.
Apr 30 00:25:16.988250 systemd-networkd[774]: eth1: Link UP
Apr 30 00:25:16.988255 systemd-networkd[774]: eth1: Gained carrier
Apr 30 00:25:16.988262 systemd-networkd[774]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 30 00:25:16.993135 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Apr 30 00:25:17.004210 ignition[783]: Ignition 2.20.0
Apr 30 00:25:17.004220 ignition[783]: Stage: fetch
Apr 30 00:25:17.004389 ignition[783]: no configs at "/usr/lib/ignition/base.d"
Apr 30 00:25:17.004398 ignition[783]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 30 00:25:17.004495 ignition[783]: parsed url from cmdline: ""
Apr 30 00:25:17.004498 ignition[783]: no config URL provided
Apr 30 00:25:17.004502 ignition[783]: reading system config file "/usr/lib/ignition/user.ign"
Apr 30 00:25:17.004508 ignition[783]: no config at "/usr/lib/ignition/user.ign"
Apr 30 00:25:17.004529 ignition[783]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1
Apr 30 00:25:17.004674 ignition[783]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable
Apr 30 00:25:17.014070 systemd-networkd[774]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1
Apr 30 00:25:17.044078 systemd-networkd[774]: eth0: DHCPv4 address 37.27.9.63/32, gateway 172.31.1.1 acquired from 172.31.1.1
Apr 30 00:25:17.205463 ignition[783]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2
Apr 30 00:25:17.209659 ignition[783]: GET result: OK
Apr 30 00:25:17.209724 ignition[783]: parsing config with SHA512: 62e67b2b9683c4facec484d92c1b5eeeca18c59054580cd0d14d07e7715fbd06487566ed1d6fafa2e91c9eba1cf866d2c56e8ecb7de515f51290dce8ed1a7486
Apr 30 00:25:17.213632 unknown[783]: fetched base config from "system"
Apr 30 00:25:17.213643 unknown[783]: fetched base config from "system"
Apr 30 00:25:17.214048 ignition[783]: fetch: fetch complete
Apr 30 00:25:17.213647 unknown[783]: fetched user config from "hetzner"
Apr 30 00:25:17.214057 ignition[783]: fetch: fetch passed
Apr 30 00:25:17.215640 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Apr 30 00:25:17.214116 ignition[783]: Ignition finished successfully
Apr 30 00:25:17.229200 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Apr 30 00:25:17.241823 ignition[790]: Ignition 2.20.0
Apr 30 00:25:17.242537 ignition[790]: Stage: kargs
Apr 30 00:25:17.242708 ignition[790]: no configs at "/usr/lib/ignition/base.d"
Apr 30 00:25:17.242718 ignition[790]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 30 00:25:17.243668 ignition[790]: kargs: kargs passed
Apr 30 00:25:17.246306 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Apr 30 00:25:17.243704 ignition[790]: Ignition finished successfully
Apr 30 00:25:17.263302 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Apr 30 00:25:17.273048 ignition[797]: Ignition 2.20.0
Apr 30 00:25:17.273060 ignition[797]: Stage: disks
Apr 30 00:25:17.275822 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Apr 30 00:25:17.274200 ignition[797]: no configs at "/usr/lib/ignition/base.d"
Apr 30 00:25:17.279188 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Apr 30 00:25:17.274212 ignition[797]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 30 00:25:17.279810 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 30 00:25:17.274997 ignition[797]: disks: disks passed
Apr 30 00:25:17.280973 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 30 00:25:17.275057 ignition[797]: Ignition finished successfully
Apr 30 00:25:17.282074 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 30 00:25:17.283382 systemd[1]: Reached target basic.target - Basic System.
Apr 30 00:25:17.293116 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Apr 30 00:25:17.307841 systemd-fsck[806]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Apr 30 00:25:17.310403 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Apr 30 00:25:17.315285 systemd[1]: Mounting sysroot.mount - /sysroot...
Apr 30 00:25:17.382035 kernel: EXT4-fs (sda9): mounted filesystem 21480c83-ef05-4682-ad3b-f751980943a0 r/w with ordered data mode. Quota mode: none.
Apr 30 00:25:17.382384 systemd[1]: Mounted sysroot.mount - /sysroot.
Apr 30 00:25:17.383284 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Apr 30 00:25:17.389084 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 30 00:25:17.391057 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Apr 30 00:25:17.393178 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Apr 30 00:25:17.394171 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Apr 30 00:25:17.394195 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 30 00:25:17.399458 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Apr 30 00:25:17.402173 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (814)
Apr 30 00:25:17.405132 kernel: BTRFS info (device sda6): first mount of filesystem e6cdb381-7cd1-4e2a-87c4-f7bcb12f058c
Apr 30 00:25:17.405161 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Apr 30 00:25:17.407444 kernel: BTRFS info (device sda6): using free space tree
Apr 30 00:25:17.407140 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Apr 30 00:25:17.415435 kernel: BTRFS info (device sda6): enabling ssd optimizations
Apr 30 00:25:17.415468 kernel: BTRFS info (device sda6): auto enabling async discard
Apr 30 00:25:17.418592 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 30 00:25:17.448686 coreos-metadata[816]: Apr 30 00:25:17.448 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1
Apr 30 00:25:17.449997 initrd-setup-root[841]: cut: /sysroot/etc/passwd: No such file or directory
Apr 30 00:25:17.451545 coreos-metadata[816]: Apr 30 00:25:17.450 INFO Fetch successful
Apr 30 00:25:17.451545 coreos-metadata[816]: Apr 30 00:25:17.450 INFO wrote hostname ci-4152-2-3-b-856bdfce49 to /sysroot/etc/hostname
Apr 30 00:25:17.453668 initrd-setup-root[848]: cut: /sysroot/etc/group: No such file or directory
Apr 30 00:25:17.454204 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Apr 30 00:25:17.458473 initrd-setup-root[856]: cut: /sysroot/etc/shadow: No such file or directory
Apr 30 00:25:17.461795 initrd-setup-root[863]: cut: /sysroot/etc/gshadow: No such file or directory
Apr 30 00:25:17.532675 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Apr 30 00:25:17.539086 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Apr 30 00:25:17.542562 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Apr 30 00:25:17.547034 kernel: BTRFS info (device sda6): last unmount of filesystem e6cdb381-7cd1-4e2a-87c4-f7bcb12f058c
Apr 30 00:25:17.563696 ignition[930]: INFO : Ignition 2.20.0
Apr 30 00:25:17.564505 ignition[930]: INFO : Stage: mount
Apr 30 00:25:17.564505 ignition[930]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 30 00:25:17.564505 ignition[930]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 30 00:25:17.567421 ignition[930]: INFO : mount: mount passed
Apr 30 00:25:17.567421 ignition[930]: INFO : Ignition finished successfully
Apr 30 00:25:17.566107 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Apr 30 00:25:17.566993 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Apr 30 00:25:17.574109 systemd[1]: Starting ignition-files.service - Ignition (files)...
Apr 30 00:25:17.836604 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Apr 30 00:25:17.843246 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 30 00:25:17.852046 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (943)
Apr 30 00:25:17.855155 kernel: BTRFS info (device sda6): first mount of filesystem e6cdb381-7cd1-4e2a-87c4-f7bcb12f058c
Apr 30 00:25:17.855192 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Apr 30 00:25:17.857719 kernel: BTRFS info (device sda6): using free space tree
Apr 30 00:25:17.862563 kernel: BTRFS info (device sda6): enabling ssd optimizations
Apr 30 00:25:17.862597 kernel: BTRFS info (device sda6): auto enabling async discard
Apr 30 00:25:17.864929 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 30 00:25:17.887100 ignition[959]: INFO : Ignition 2.20.0
Apr 30 00:25:17.887100 ignition[959]: INFO : Stage: files
Apr 30 00:25:17.888583 ignition[959]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 30 00:25:17.888583 ignition[959]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 30 00:25:17.888583 ignition[959]: DEBUG : files: compiled without relabeling support, skipping
Apr 30 00:25:17.890926 ignition[959]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Apr 30 00:25:17.890926 ignition[959]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Apr 30 00:25:17.893452 ignition[959]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Apr 30 00:25:17.894305 ignition[959]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Apr 30 00:25:17.894305 ignition[959]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Apr 30 00:25:17.894051 unknown[959]: wrote ssh authorized keys file for user: core
Apr 30 00:25:17.896765 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Apr 30 00:25:17.896765 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Apr 30 00:25:18.061835 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Apr 30 00:25:18.097295 systemd-networkd[774]: eth0: Gained IPv6LL
Apr 30 00:25:18.098387 systemd-networkd[774]: eth1: Gained IPv6LL
Apr 30 00:25:19.389476 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Apr 30 00:25:19.390914 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Apr 30 00:25:19.390914 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Apr 30 00:25:20.026635 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Apr 30 00:25:20.070611 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Apr 30 00:25:20.070611 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Apr 30 00:25:20.072344 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Apr 30 00:25:20.072344 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Apr 30 00:25:20.072344 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 30 00:25:20.072344 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 30 00:25:20.072344 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 30 00:25:20.072344 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 30 00:25:20.072344 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 30 00:25:20.072344 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Apr 30 00:25:20.072344 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 30 00:25:20.072344 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Apr 30 00:25:20.072344 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Apr 30 00:25:20.072344 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Apr 30 00:25:20.072344 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1
Apr 30 00:25:20.647081 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Apr 30 00:25:20.771582 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Apr 30 00:25:20.771582 ignition[959]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Apr 30 00:25:20.774738 ignition[959]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 30 00:25:20.774738 ignition[959]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 30 00:25:20.774738 ignition[959]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Apr 30 00:25:20.774738 ignition[959]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Apr 30 00:25:20.774738 ignition[959]: INFO : files: op(e): op(f): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Apr 30 00:25:20.774738 ignition[959]: INFO : files: op(e): op(f): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Apr 30 00:25:20.774738 ignition[959]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Apr 30 00:25:20.774738 ignition[959]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service"
Apr 30 00:25:20.774738 ignition[959]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service"
Apr 30 00:25:20.774738 ignition[959]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json"
Apr 30 00:25:20.774738 ignition[959]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 30 00:25:20.774738 ignition[959]: INFO : files: files passed
Apr 30 00:25:20.774738 ignition[959]: INFO : Ignition finished successfully
Apr 30 00:25:20.775696 systemd[1]: Finished ignition-files.service - Ignition (files).
Apr 30 00:25:20.784128 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Apr 30 00:25:20.788130 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 30 00:25:20.789267 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 30 00:25:20.789350 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 30 00:25:20.796743 initrd-setup-root-after-ignition[989]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 30 00:25:20.796743 initrd-setup-root-after-ignition[989]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Apr 30 00:25:20.798520 initrd-setup-root-after-ignition[993]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 30 00:25:20.798996 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 30 00:25:20.800192 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Apr 30 00:25:20.804124 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Apr 30 00:25:20.821999 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 30 00:25:20.822175 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 30 00:25:20.823571 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Apr 30 00:25:20.825064 systemd[1]: Reached target initrd.target - Initrd Default Target.
Apr 30 00:25:20.825595 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Apr 30 00:25:20.826895 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Apr 30 00:25:20.839510 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 30 00:25:20.844231 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Apr 30 00:25:20.853504 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Apr 30 00:25:20.854808 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 30 00:25:20.855525 systemd[1]: Stopped target timers.target - Timer Units.
Apr 30 00:25:20.856504 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 30 00:25:20.856612 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 30 00:25:20.857720 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Apr 30 00:25:20.858375 systemd[1]: Stopped target basic.target - Basic System.
Apr 30 00:25:20.859407 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Apr 30 00:25:20.860396 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 30 00:25:20.861383 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Apr 30 00:25:20.862414 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Apr 30 00:25:20.863435 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 30 00:25:20.864480 systemd[1]: Stopped target sysinit.target - System Initialization.
Apr 30 00:25:20.865594 systemd[1]: Stopped target local-fs.target - Local File Systems.
Apr 30 00:25:20.866675 systemd[1]: Stopped target swap.target - Swaps.
Apr 30 00:25:20.867650 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 30 00:25:20.867748 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Apr 30 00:25:20.868870 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Apr 30 00:25:20.869513 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 30 00:25:20.870415 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Apr 30 00:25:20.870513 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 30 00:25:20.871564 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 30 00:25:20.871647 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Apr 30 00:25:20.873161 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 30 00:25:20.873253 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 30 00:25:20.873897 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 30 00:25:20.874022 systemd[1]: Stopped ignition-files.service - Ignition (files).
Apr 30 00:25:20.874843 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Apr 30 00:25:20.874981 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Apr 30 00:25:20.885503 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Apr 30 00:25:20.886070 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 30 00:25:20.886276 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 30 00:25:20.888202 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Apr 30 00:25:20.890153 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 30 00:25:20.890253 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 30 00:25:20.890803 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 30 00:25:20.890882 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 30 00:25:20.898174 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 30 00:25:20.903851 ignition[1013]: INFO : Ignition 2.20.0
Apr 30 00:25:20.903851 ignition[1013]: INFO : Stage: umount
Apr 30 00:25:20.903851 ignition[1013]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 30 00:25:20.903851 ignition[1013]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 30 00:25:20.898244 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Apr 30 00:25:20.908637 ignition[1013]: INFO : umount: umount passed
Apr 30 00:25:20.908637 ignition[1013]: INFO : Ignition finished successfully
Apr 30 00:25:20.909792 systemd[1]: ignition-mount.service: Deactivated successfully.
Apr 30 00:25:20.910169 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Apr 30 00:25:20.912361 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Apr 30 00:25:20.913228 systemd[1]: sysroot-boot.service: Deactivated successfully.
Apr 30 00:25:20.913297 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Apr 30 00:25:20.915229 systemd[1]: ignition-disks.service: Deactivated successfully.
Apr 30 00:25:20.915298 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Apr 30 00:25:20.916088 systemd[1]: ignition-kargs.service: Deactivated successfully.
Apr 30 00:25:20.916124 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Apr 30 00:25:20.916935 systemd[1]: ignition-fetch.service: Deactivated successfully.
Apr 30 00:25:20.916971 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Apr 30 00:25:20.917862 systemd[1]: Stopped target network.target - Network.
Apr 30 00:25:20.918694 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Apr 30 00:25:20.918732 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 30 00:25:20.919668 systemd[1]: Stopped target paths.target - Path Units.
Apr 30 00:25:20.920515 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Apr 30 00:25:20.920557 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 30 00:25:20.921482 systemd[1]: Stopped target slices.target - Slice Units.
Apr 30 00:25:20.922332 systemd[1]: Stopped target sockets.target - Socket Units.
Apr 30 00:25:20.923408 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 30 00:25:20.923437 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Apr 30 00:25:20.924418 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 30 00:25:20.924444 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 30 00:25:20.925335 systemd[1]: ignition-setup.service: Deactivated successfully.
Apr 30 00:25:20.925369 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Apr 30 00:25:20.926348 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Apr 30 00:25:20.926377 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Apr 30 00:25:20.927461 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Apr 30 00:25:20.927491 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Apr 30 00:25:20.928470 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Apr 30 00:25:20.929425 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Apr 30 00:25:20.931055 systemd-networkd[774]: eth0: DHCPv6 lease lost
Apr 30 00:25:20.934055 systemd-networkd[774]: eth1: DHCPv6 lease lost
Apr 30 00:25:20.936391 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 30 00:25:20.936657 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Apr 30 00:25:20.937587 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 30 00:25:20.937721 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Apr 30 00:25:20.939844 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 30 00:25:20.939881 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Apr 30 00:25:20.945118 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Apr 30 00:25:20.945567 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 30 00:25:20.945622 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 30 00:25:20.946188 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 30 00:25:20.946224 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 30 00:25:20.947165 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 30 00:25:20.947197 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Apr 30 00:25:20.948183 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Apr 30 00:25:20.948214 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 30 00:25:20.949369 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 30 00:25:20.957907 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 30 00:25:20.957988 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Apr 30 00:25:20.965573 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 30 00:25:20.965694 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 30 00:25:20.966861 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 30 00:25:20.966893 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Apr 30 00:25:20.967734 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Apr 30 00:25:20.967759 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 30 00:25:20.968728 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 30 00:25:20.968762 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Apr 30 00:25:20.970174 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 30 00:25:20.970206 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Apr 30 00:25:20.971230 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 30 00:25:20.971262 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 30 00:25:20.981119 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Apr 30 00:25:20.981704 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Apr 30 00:25:20.981744 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 30 00:25:20.982258 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 30 00:25:20.982290 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 00:25:20.986231 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 30 00:25:20.986329 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Apr 30 00:25:20.987722 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Apr 30 00:25:20.990163 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Apr 30 00:25:20.997728 systemd[1]: Switching root.
Apr 30 00:25:21.042124 systemd-journald[188]: Journal stopped
Apr 30 00:25:21.800339 systemd-journald[188]: Received SIGTERM from PID 1 (systemd).
Apr 30 00:25:21.800391 kernel: SELinux: policy capability network_peer_controls=1
Apr 30 00:25:21.800403 kernel: SELinux: policy capability open_perms=1
Apr 30 00:25:21.800410 kernel: SELinux: policy capability extended_socket_class=1
Apr 30 00:25:21.800417 kernel: SELinux: policy capability always_check_network=0
Apr 30 00:25:21.800424 kernel: SELinux: policy capability cgroup_seclabel=1
Apr 30 00:25:21.800432 kernel: SELinux: policy capability nnp_nosuid_transition=1
Apr 30 00:25:21.800439 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Apr 30 00:25:21.800448 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Apr 30 00:25:21.800456 kernel: audit: type=1403 audit(1745972721.175:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Apr 30 00:25:21.800464 systemd[1]: Successfully loaded SELinux policy in 36.720ms.
Apr 30 00:25:21.800480 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 16.961ms.
Apr 30 00:25:21.800489 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 30 00:25:21.800497 systemd[1]: Detected virtualization kvm.
Apr 30 00:25:21.800505 systemd[1]: Detected architecture x86-64.
Apr 30 00:25:21.800513 systemd[1]: Detected first boot.
Apr 30 00:25:21.800523 systemd[1]: Hostname set to .
Apr 30 00:25:21.800531 systemd[1]: Initializing machine ID from VM UUID.
Apr 30 00:25:21.800539 zram_generator::config[1056]: No configuration found.
Apr 30 00:25:21.800548 systemd[1]: Populated /etc with preset unit settings.
Apr 30 00:25:21.800556 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Apr 30 00:25:21.800564 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Apr 30 00:25:21.800572 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Apr 30 00:25:21.800580 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Apr 30 00:25:21.800590 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Apr 30 00:25:21.800597 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Apr 30 00:25:21.800607 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Apr 30 00:25:21.800615 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Apr 30 00:25:21.800624 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Apr 30 00:25:21.800632 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Apr 30 00:25:21.800639 systemd[1]: Created slice user.slice - User and Session Slice.
Apr 30 00:25:21.800647 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 30 00:25:21.800656 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 30 00:25:21.800665 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Apr 30 00:25:21.800673 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Apr 30 00:25:21.800682 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Apr 30 00:25:21.800690 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 30 00:25:21.800697 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Apr 30 00:25:21.800706 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 30 00:25:21.800716 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Apr 30 00:25:21.800724 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Apr 30 00:25:21.800733 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Apr 30 00:25:21.800741 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Apr 30 00:25:21.800750 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 30 00:25:21.800760 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 30 00:25:21.800768 systemd[1]: Reached target slices.target - Slice Units.
Apr 30 00:25:21.800776 systemd[1]: Reached target swap.target - Swaps.
Apr 30 00:25:21.800797 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Apr 30 00:25:21.800807 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Apr 30 00:25:21.800815 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 30 00:25:21.800823 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 30 00:25:21.800831 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 30 00:25:21.800839 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Apr 30 00:25:21.800848 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Apr 30 00:25:21.800856 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Apr 30 00:25:21.800865 systemd[1]: Mounting media.mount - External Media Directory...
Apr 30 00:25:21.800879 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 30 00:25:21.800889 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Apr 30 00:25:21.800897 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Apr 30 00:25:21.800905 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Apr 30 00:25:21.800914 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Apr 30 00:25:21.800922 systemd[1]: Reached target machines.target - Containers.
Apr 30 00:25:21.800930 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Apr 30 00:25:21.800939 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 30 00:25:21.800947 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 30 00:25:21.800956 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Apr 30 00:25:21.800964 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 30 00:25:21.800975 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 30 00:25:21.800984 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 30 00:25:21.800992 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Apr 30 00:25:21.801001 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 30 00:25:21.802272 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Apr 30 00:25:21.802289 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Apr 30 00:25:21.802299 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Apr 30 00:25:21.802308 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Apr 30 00:25:21.802316 systemd[1]: Stopped systemd-fsck-usr.service.
Apr 30 00:25:21.802324 kernel: loop: module loaded
Apr 30 00:25:21.802333 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 30 00:25:21.802341 kernel: fuse: init (API version 7.39)
Apr 30 00:25:21.802348 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 30 00:25:21.802361 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Apr 30 00:25:21.802388 systemd-journald[1146]: Collecting audit messages is disabled.
Apr 30 00:25:21.802420 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Apr 30 00:25:21.802437 systemd-journald[1146]: Journal started
Apr 30 00:25:21.802809 systemd-journald[1146]: Runtime Journal (/run/log/journal/46fb5e6cfedd4454a74a589e098dd8b4) is 4.8M, max 38.4M, 33.6M free.
Apr 30 00:25:21.583864 systemd[1]: Queued start job for default target multi-user.target.
Apr 30 00:25:21.597523 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Apr 30 00:25:21.597864 systemd[1]: systemd-journald.service: Deactivated successfully.
Apr 30 00:25:21.806583 kernel: ACPI: bus type drm_connector registered
Apr 30 00:25:21.806610 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 30 00:25:21.809267 systemd[1]: verity-setup.service: Deactivated successfully.
Apr 30 00:25:21.809295 systemd[1]: Stopped verity-setup.service.
Apr 30 00:25:21.817054 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 30 00:25:21.819037 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 30 00:25:21.819776 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Apr 30 00:25:21.820417 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Apr 30 00:25:21.821004 systemd[1]: Mounted media.mount - External Media Directory.
Apr 30 00:25:21.821554 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Apr 30 00:25:21.822147 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Apr 30 00:25:21.822755 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Apr 30 00:25:21.823415 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Apr 30 00:25:21.824120 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 30 00:25:21.824866 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Apr 30 00:25:21.825126 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Apr 30 00:25:21.825851 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 30 00:25:21.826005 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 30 00:25:21.826707 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 30 00:25:21.826812 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 30 00:25:21.827647 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 30 00:25:21.827751 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 30 00:25:21.828490 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Apr 30 00:25:21.828629 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Apr 30 00:25:21.829405 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 30 00:25:21.829493 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 30 00:25:21.830211 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 30 00:25:21.830907 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Apr 30 00:25:21.831742 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Apr 30 00:25:21.838949 systemd[1]: Reached target network-pre.target - Preparation for Network.
Apr 30 00:25:21.844610 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Apr 30 00:25:21.848983 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Apr 30 00:25:21.849562 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Apr 30 00:25:21.849637 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 30 00:25:21.850910 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Apr 30 00:25:21.865397 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Apr 30 00:25:21.869156 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Apr 30 00:25:21.869907 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 30 00:25:21.871248 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Apr 30 00:25:21.874114 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Apr 30 00:25:21.874657 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 30 00:25:21.878035 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Apr 30 00:25:21.878648 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 30 00:25:21.883090 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 30 00:25:21.885141 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Apr 30 00:25:21.888174 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Apr 30 00:25:21.891378 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 30 00:25:21.892661 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Apr 30 00:25:21.894229 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Apr 30 00:25:21.901166 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Apr 30 00:25:21.902246 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Apr 30 00:25:21.903148 systemd-journald[1146]: Time spent on flushing to /var/log/journal/46fb5e6cfedd4454a74a589e098dd8b4 is 32.005ms for 1137 entries.
Apr 30 00:25:21.903148 systemd-journald[1146]: System Journal (/var/log/journal/46fb5e6cfedd4454a74a589e098dd8b4) is 8.0M, max 584.8M, 576.8M free.
Apr 30 00:25:21.965093 systemd-journald[1146]: Received client request to flush runtime journal.
Apr 30 00:25:21.965125 kernel: loop0: detected capacity change from 0 to 205544
Apr 30 00:25:21.908362 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Apr 30 00:25:21.917580 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Apr 30 00:25:21.921194 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Apr 30 00:25:21.959401 udevadm[1184]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Apr 30 00:25:21.967499 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Apr 30 00:25:21.976543 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 30 00:25:21.980710 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Apr 30 00:25:21.983115 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Apr 30 00:25:21.991032 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Apr 30 00:25:21.992825 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Apr 30 00:25:22.000703 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 30 00:25:22.013793 systemd-tmpfiles[1195]: ACLs are not supported, ignoring.
Apr 30 00:25:22.014135 systemd-tmpfiles[1195]: ACLs are not supported, ignoring.
Apr 30 00:25:22.016490 kernel: loop1: detected capacity change from 0 to 140992
Apr 30 00:25:22.018301 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 30 00:25:22.052327 kernel: loop2: detected capacity change from 0 to 8
Apr 30 00:25:22.073045 kernel: loop3: detected capacity change from 0 to 138184
Apr 30 00:25:22.109042 kernel: loop4: detected capacity change from 0 to 205544
Apr 30 00:25:22.126042 kernel: loop5: detected capacity change from 0 to 140992
Apr 30 00:25:22.148029 kernel: loop6: detected capacity change from 0 to 8
Apr 30 00:25:22.150030 kernel: loop7: detected capacity change from 0 to 138184
Apr 30 00:25:22.163754 (sd-merge)[1201]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'.
Apr 30 00:25:22.164128 (sd-merge)[1201]: Merged extensions into '/usr'.
Apr 30 00:25:22.167750 systemd[1]: Reloading requested from client PID 1176 ('systemd-sysext') (unit systemd-sysext.service)...
Apr 30 00:25:22.167854 systemd[1]: Reloading...
Apr 30 00:25:22.242484 zram_generator::config[1224]: No configuration found.
Apr 30 00:25:22.342760 ldconfig[1171]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Apr 30 00:25:22.342830 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 30 00:25:22.380436 systemd[1]: Reloading finished in 212 ms.
Apr 30 00:25:22.404142 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Apr 30 00:25:22.404997 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Apr 30 00:25:22.415521 systemd[1]: Starting ensure-sysext.service...
Apr 30 00:25:22.418128 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 30 00:25:22.425732 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Apr 30 00:25:22.429102 systemd[1]: Reloading requested from client PID 1270 ('systemctl') (unit ensure-sysext.service)...
Apr 30 00:25:22.429118 systemd[1]: Reloading...
Apr 30 00:25:22.439096 systemd-tmpfiles[1271]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Apr 30 00:25:22.439324 systemd-tmpfiles[1271]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Apr 30 00:25:22.439944 systemd-tmpfiles[1271]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Apr 30 00:25:22.440168 systemd-tmpfiles[1271]: ACLs are not supported, ignoring.
Apr 30 00:25:22.440211 systemd-tmpfiles[1271]: ACLs are not supported, ignoring.
Apr 30 00:25:22.442210 systemd-tmpfiles[1271]: Detected autofs mount point /boot during canonicalization of boot.
Apr 30 00:25:22.442219 systemd-tmpfiles[1271]: Skipping /boot
Apr 30 00:25:22.448147 systemd-tmpfiles[1271]: Detected autofs mount point /boot during canonicalization of boot.
Apr 30 00:25:22.448157 systemd-tmpfiles[1271]: Skipping /boot
Apr 30 00:25:22.478038 zram_generator::config[1295]: No configuration found.
Apr 30 00:25:22.556685 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 30 00:25:22.593857 systemd[1]: Reloading finished in 164 ms.
Apr 30 00:25:22.611351 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 30 00:25:22.615122 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Apr 30 00:25:22.618894 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Apr 30 00:25:22.622163 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Apr 30 00:25:22.626372 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 30 00:25:22.632163 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 30 00:25:22.634370 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Apr 30 00:25:22.640079 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 30 00:25:22.641085 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 30 00:25:22.647206 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 30 00:25:22.648576 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 30 00:25:22.650680 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 30 00:25:22.651388 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 30 00:25:22.651580 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 30 00:25:22.655070 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 30 00:25:22.655189 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 30 00:25:22.655296 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 30 00:25:22.659316 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Apr 30 00:25:22.660049 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 30 00:25:22.660537 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 30 00:25:22.661570 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 30 00:25:22.662547 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 30 00:25:22.662652 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 30 00:25:22.669219 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Apr 30 00:25:22.670433 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 30 00:25:22.675096 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 30 00:25:22.678731 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 30 00:25:22.678975 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 30 00:25:22.685261 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 30 00:25:22.688890 systemd-udevd[1349]: Using default interface naming scheme 'v255'.
Apr 30 00:25:22.689412 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 30 00:25:22.692146 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 30 00:25:22.693154 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 30 00:25:22.695551 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Apr 30 00:25:22.696109 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 30 00:25:22.697658 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 30 00:25:22.697944 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 30 00:25:22.699811 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Apr 30 00:25:22.702556 systemd[1]: Finished ensure-sysext.service.
Apr 30 00:25:22.712707 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Apr 30 00:25:22.716387 augenrules[1381]: No rules
Apr 30 00:25:22.717364 systemd[1]: audit-rules.service: Deactivated successfully.
Apr 30 00:25:22.717916 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Apr 30 00:25:22.724439 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Apr 30 00:25:22.725542 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 30 00:25:22.725646 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 30 00:25:22.727867 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 30 00:25:22.728002 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 30 00:25:22.730093 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 30 00:25:22.730228 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 30 00:25:22.735558 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Apr 30 00:25:22.744114 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 30 00:25:22.746146 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Apr 30 00:25:22.750981 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 30 00:25:22.751488 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Apr 30 00:25:22.792620 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Apr 30 00:25:22.852544 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Apr 30 00:25:22.853701 systemd[1]: Reached target time-set.target - System Time Set.
Apr 30 00:25:22.858608 systemd-resolved[1345]: Positive Trust Anchors:
Apr 30 00:25:22.858620 systemd-resolved[1345]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 30 00:25:22.858644 systemd-resolved[1345]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 30 00:25:22.864248 systemd-resolved[1345]: Using system hostname 'ci-4152-2-3-b-856bdfce49'.
Apr 30 00:25:22.866368 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 30 00:25:22.866945 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 30 00:25:22.872137 systemd-networkd[1399]: lo: Link UP
Apr 30 00:25:22.872143 systemd-networkd[1399]: lo: Gained carrier
Apr 30 00:25:22.874926 systemd-networkd[1399]: Enumeration completed
Apr 30 00:25:22.875021 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 30 00:25:22.875967 systemd[1]: Reached target network.target - Network.
Apr 30 00:25:22.876325 systemd-networkd[1399]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 30 00:25:22.876388 systemd-networkd[1399]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 30 00:25:22.877421 systemd-networkd[1399]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 30 00:25:22.877469 systemd-networkd[1399]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 30 00:25:22.878300 systemd-networkd[1399]: eth0: Link UP
Apr 30 00:25:22.878355 systemd-networkd[1399]: eth0: Gained carrier
Apr 30 00:25:22.878406 systemd-networkd[1399]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 30 00:25:22.882315 systemd-networkd[1399]: eth1: Link UP
Apr 30 00:25:22.882608 systemd-networkd[1399]: eth1: Gained carrier
Apr 30 00:25:22.882668 systemd-networkd[1399]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 30 00:25:22.882984 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Apr 30 00:25:22.896075 systemd-networkd[1399]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 30 00:25:22.904861 systemd-networkd[1399]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1
Apr 30 00:25:22.905562 systemd-timesyncd[1382]: Network configuration changed, trying to establish connection.
Apr 30 00:25:22.908167 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 40 scanned by (udev-worker) (1405)
Apr 30 00:25:22.911039 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4
Apr 30 00:25:22.917089 kernel: ACPI: button: Power Button [PWRF]
Apr 30 00:25:22.939163 kernel: mousedev: PS/2 mouse device common for all mice
Apr 30 00:25:22.936313 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped.
Apr 30 00:25:22.937123 systemd-networkd[1399]: eth0: DHCPv4 address 37.27.9.63/32, gateway 172.31.1.1 acquired from 172.31.1.1
Apr 30 00:25:22.937941 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 30 00:25:22.938033 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 30 00:25:22.940070 systemd-timesyncd[1382]: Network configuration changed, trying to establish connection.
Apr 30 00:25:22.942327 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 30 00:25:22.947704 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 30 00:25:22.956170 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 30 00:25:22.957132 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 30 00:25:22.957175 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Apr 30 00:25:22.957196 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 30 00:25:22.957494 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 30 00:25:22.958066 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 30 00:25:22.960690 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 30 00:25:22.960892 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 30 00:25:22.962539 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 30 00:25:22.963109 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 30 00:25:22.971679 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 30 00:25:22.971720 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 30 00:25:23.004800 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Apr 30 00:25:23.006176 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Apr 30 00:25:23.006302 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Apr 30 00:25:23.006387 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input5
Apr 30 00:25:23.021326 kernel: EDAC MC: Ver: 3.0.0
Apr 30 00:25:23.021986 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Apr 30 00:25:23.030224 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Apr 30 00:25:23.033809 kernel: [drm] pci: virtio-vga detected at 0000:00:01.0
Apr 30 00:25:23.038033 kernel: virtio-pci 0000:00:01.0: vgaarb: deactivate vga console
Apr 30 00:25:23.043047 kernel: Console: switching to colour dummy device 80x25
Apr 30 00:25:23.043092 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Apr 30 00:25:23.043105 kernel: [drm] features: -context_init
Apr 30 00:25:23.042225 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 30 00:25:23.046027 kernel: [drm] number of scanouts: 1
Apr 30 00:25:23.046065 kernel: [drm] number of cap sets: 0
Apr 30 00:25:23.049029 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0
Apr 30 00:25:23.053967 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Apr 30 00:25:23.054027 kernel: Console: switching to colour frame buffer device 160x50
Apr 30 00:25:23.054642 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Apr 30 00:25:23.058034 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Apr 30 00:25:23.060869 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 30 00:25:23.061038 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 00:25:23.076229 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 30 00:25:23.118072 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 00:25:23.186943 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Apr 30 00:25:23.191258 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Apr 30 00:25:23.201053 lvm[1456]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 30 00:25:23.230057 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Apr 30 00:25:23.231708 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 30 00:25:23.231836 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 30 00:25:23.232030 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Apr 30 00:25:23.232134 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Apr 30 00:25:23.232372 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Apr 30 00:25:23.232531 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Apr 30 00:25:23.232605 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Apr 30 00:25:23.232671 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Apr 30 00:25:23.232714 systemd[1]: Reached target paths.target - Path Units.
Apr 30 00:25:23.232800 systemd[1]: Reached target timers.target - Timer Units.
Apr 30 00:25:23.234476 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Apr 30 00:25:23.236874 systemd[1]: Starting docker.socket - Docker Socket for the API...
Apr 30 00:25:23.240351 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Apr 30 00:25:23.241664 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Apr 30 00:25:23.242407 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Apr 30 00:25:23.242530 systemd[1]: Reached target sockets.target - Socket Units.
Apr 30 00:25:23.242586 systemd[1]: Reached target basic.target - Basic System.
Apr 30 00:25:23.243096 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Apr 30 00:25:23.243117 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Apr 30 00:25:23.245109 systemd[1]: Starting containerd.service - containerd container runtime...
Apr 30 00:25:23.249423 lvm[1460]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 30 00:25:23.254120 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Apr 30 00:25:23.258148 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Apr 30 00:25:23.263100 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Apr 30 00:25:23.265893 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Apr 30 00:25:23.266389 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Apr 30 00:25:23.275127 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Apr 30 00:25:23.285134 coreos-metadata[1462]: Apr 30 00:25:23.284 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1
Apr 30 00:25:23.283165 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Apr 30 00:25:23.293047 coreos-metadata[1462]: Apr 30 00:25:23.285 INFO Fetch successful
Apr 30 00:25:23.293047 coreos-metadata[1462]: Apr 30 00:25:23.285 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1
Apr 30 00:25:23.293047 coreos-metadata[1462]: Apr 30 00:25:23.285 INFO Fetch successful
Apr 30 00:25:23.286144 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent.
Apr 30 00:25:23.289188 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Apr 30 00:25:23.294506 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Apr 30 00:25:23.296282 dbus-daemon[1463]: [system] SELinux support is enabled
Apr 30 00:25:23.297870 jq[1464]: false
Apr 30 00:25:23.301827 systemd[1]: Starting systemd-logind.service - User Login Management...
Apr 30 00:25:23.309109 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Apr 30 00:25:23.309499 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Apr 30 00:25:23.310156 systemd[1]: Starting update-engine.service - Update Engine...
Apr 30 00:25:23.314169 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Apr 30 00:25:23.314920 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Apr 30 00:25:23.327019 extend-filesystems[1467]: Found loop4
Apr 30 00:25:23.327019 extend-filesystems[1467]: Found loop5
Apr 30 00:25:23.327019 extend-filesystems[1467]: Found loop6
Apr 30 00:25:23.327019 extend-filesystems[1467]: Found loop7
Apr 30 00:25:23.327019 extend-filesystems[1467]: Found sda
Apr 30 00:25:23.327019 extend-filesystems[1467]: Found sda1
Apr 30 00:25:23.327019 extend-filesystems[1467]: Found sda2
Apr 30 00:25:23.327019 extend-filesystems[1467]: Found sda3
Apr 30 00:25:23.327019 extend-filesystems[1467]: Found usr
Apr 30 00:25:23.327019 extend-filesystems[1467]: Found sda4
Apr 30 00:25:23.327019 extend-filesystems[1467]: Found sda6
Apr 30 00:25:23.327019 extend-filesystems[1467]: Found sda7
Apr 30 00:25:23.327019 extend-filesystems[1467]: Found sda9
Apr 30 00:25:23.327019 extend-filesystems[1467]: Checking size of /dev/sda9
Apr 30 00:25:23.405147 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks
Apr 30 00:25:23.405185 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 40 scanned by (udev-worker) (1404)
Apr 30 00:25:23.320609 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Apr 30 00:25:23.405309 extend-filesystems[1467]: Resized partition /dev/sda9
Apr 30 00:25:23.333307 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Apr 30 00:25:23.405744 extend-filesystems[1493]: resize2fs 1.47.1 (20-May-2024)
Apr 30 00:25:23.333438 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Apr 30 00:25:23.333655 systemd[1]: motdgen.service: Deactivated successfully.
Apr 30 00:25:23.417599 update_engine[1478]: I20250430 00:25:23.413156 1478 main.cc:92] Flatcar Update Engine starting
Apr 30 00:25:23.333769 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Apr 30 00:25:23.417852 jq[1481]: true
Apr 30 00:25:23.360620 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Apr 30 00:25:23.360738 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Apr 30 00:25:23.424288 update_engine[1478]: I20250430 00:25:23.422181 1478 update_check_scheduler.cc:74] Next update check in 11m43s
Apr 30 00:25:23.388504 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Apr 30 00:25:23.424418 jq[1497]: true
Apr 30 00:25:23.388530 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Apr 30 00:25:23.442279 tar[1496]: linux-amd64/helm
Apr 30 00:25:23.391345 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Apr 30 00:25:23.391360 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Apr 30 00:25:23.425751 (ntainerd)[1498]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Apr 30 00:25:23.426229 systemd[1]: Started update-engine.service - Update Engine.
Apr 30 00:25:23.461146 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Apr 30 00:25:23.492825 systemd-logind[1474]: New seat seat0.
Apr 30 00:25:23.493448 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Apr 30 00:25:23.498477 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Apr 30 00:25:23.520221 systemd-logind[1474]: Watching system buttons on /dev/input/event2 (Power Button)
Apr 30 00:25:23.520240 systemd-logind[1474]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Apr 30 00:25:23.521253 systemd[1]: Started systemd-logind.service - User Login Management.
Apr 30 00:25:23.529291 locksmithd[1517]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Apr 30 00:25:23.538649 kernel: EXT4-fs (sda9): resized filesystem to 9393147
Apr 30 00:25:23.564741 bash[1532]: Updated "/home/core/.ssh/authorized_keys"
Apr 30 00:25:23.567965 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Apr 30 00:25:23.570740 extend-filesystems[1493]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required
Apr 30 00:25:23.570740 extend-filesystems[1493]: old_desc_blocks = 1, new_desc_blocks = 5
Apr 30 00:25:23.570740 extend-filesystems[1493]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long.
Apr 30 00:25:23.571845 extend-filesystems[1467]: Resized filesystem in /dev/sda9
Apr 30 00:25:23.571845 extend-filesystems[1467]: Found sr0
Apr 30 00:25:23.573043 systemd[1]: extend-filesystems.service: Deactivated successfully.
Apr 30 00:25:23.573160 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Apr 30 00:25:23.591219 systemd[1]: Starting sshkeys.service...
Apr 30 00:25:23.608138 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Apr 30 00:25:23.615258 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Apr 30 00:25:23.651205 coreos-metadata[1547]: Apr 30 00:25:23.649 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1
Apr 30 00:25:23.652738 coreos-metadata[1547]: Apr 30 00:25:23.652 INFO Fetch successful
Apr 30 00:25:23.654167 unknown[1547]: wrote ssh authorized keys file for user: core
Apr 30 00:25:23.678319 containerd[1498]: time="2025-04-30T00:25:23.678264269Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Apr 30 00:25:23.679144 update-ssh-keys[1552]: Updated "/home/core/.ssh/authorized_keys"
Apr 30 00:25:23.679838 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Apr 30 00:25:23.685239 systemd[1]: Finished sshkeys.service.
Apr 30 00:25:23.726637 containerd[1498]: time="2025-04-30T00:25:23.726393096Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Apr 30 00:25:23.730032 containerd[1498]: time="2025-04-30T00:25:23.729238161Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.88-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Apr 30 00:25:23.730032 containerd[1498]: time="2025-04-30T00:25:23.729262927Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Apr 30 00:25:23.730032 containerd[1498]: time="2025-04-30T00:25:23.729276393Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Apr 30 00:25:23.730032 containerd[1498]: time="2025-04-30T00:25:23.729396959Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Apr 30 00:25:23.730032 containerd[1498]: time="2025-04-30T00:25:23.729411576Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Apr 30 00:25:23.730032 containerd[1498]: time="2025-04-30T00:25:23.729459776Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Apr 30 00:25:23.730032 containerd[1498]: time="2025-04-30T00:25:23.729469825Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Apr 30 00:25:23.730032 containerd[1498]: time="2025-04-30T00:25:23.729593076Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 30 00:25:23.730032 containerd[1498]: time="2025-04-30T00:25:23.729605039Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Apr 30 00:25:23.730032 containerd[1498]: time="2025-04-30T00:25:23.729615188Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Apr 30 00:25:23.730032 containerd[1498]: time="2025-04-30T00:25:23.729622432Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Apr 30 00:25:23.730224 containerd[1498]: time="2025-04-30T00:25:23.729676172Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Apr 30 00:25:23.730224 containerd[1498]: time="2025-04-30T00:25:23.729841562Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Apr 30 00:25:23.730224 containerd[1498]: time="2025-04-30T00:25:23.729918246Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 30 00:25:23.730224 containerd[1498]: time="2025-04-30T00:25:23.729932072Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Apr 30 00:25:23.730224 containerd[1498]: time="2025-04-30T00:25:23.729991934Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Apr 30 00:25:23.731091 containerd[1498]: time="2025-04-30T00:25:23.731075446Z" level=info msg="metadata content store policy set" policy=shared
Apr 30 00:25:23.734771 containerd[1498]: time="2025-04-30T00:25:23.734755006Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Apr 30 00:25:23.734883 containerd[1498]: time="2025-04-30T00:25:23.734869871Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Apr 30 00:25:23.734962 containerd[1498]: time="2025-04-30T00:25:23.734951113Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Apr 30 00:25:23.735120 containerd[1498]: time="2025-04-30T00:25:23.735106534Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Apr 30 00:25:23.735181 containerd[1498]: time="2025-04-30T00:25:23.735170244Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Apr 30 00:25:23.737808 containerd[1498]: time="2025-04-30T00:25:23.737116013Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Apr 30 00:25:23.737808 containerd[1498]: time="2025-04-30T00:25:23.737299006Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Apr 30 00:25:23.737808 containerd[1498]: time="2025-04-30T00:25:23.737369207Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Apr 30 00:25:23.737808 containerd[1498]: time="2025-04-30T00:25:23.737381270Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Apr 30 00:25:23.737808 containerd[1498]: time="2025-04-30T00:25:23.737395938Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Apr 30 00:25:23.737808 containerd[1498]: time="2025-04-30T00:25:23.737407540Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Apr 30 00:25:23.737808 containerd[1498]: time="2025-04-30T00:25:23.737417037Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Apr 30 00:25:23.737808 containerd[1498]: time="2025-04-30T00:25:23.737426656Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Apr 30 00:25:23.737808 containerd[1498]: time="2025-04-30T00:25:23.737436795Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Apr 30 00:25:23.737808 containerd[1498]: time="2025-04-30T00:25:23.737446613Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Apr 30 00:25:23.737808 containerd[1498]: time="2025-04-30T00:25:23.737455920Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Apr 30 00:25:23.737808 containerd[1498]: time="2025-04-30T00:25:23.737464747Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Apr 30 00:25:23.737808 containerd[1498]: time="2025-04-30T00:25:23.737472290Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Apr 30 00:25:23.737808 containerd[1498]: time="2025-04-30T00:25:23.737499753Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Apr 30 00:25:23.738065 containerd[1498]: time="2025-04-30T00:25:23.737510022Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Apr 30 00:25:23.738065 containerd[1498]: time="2025-04-30T00:25:23.737519399Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Apr 30 00:25:23.738065 containerd[1498]: time="2025-04-30T00:25:23.737529208Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Apr 30 00:25:23.738065 containerd[1498]: time="2025-04-30T00:25:23.737538204Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Apr 30 00:25:23.738065 containerd[1498]: time="2025-04-30T00:25:23.737547872Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Apr 30 00:25:23.738065 containerd[1498]: time="2025-04-30T00:25:23.737563301Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Apr 30 00:25:23.738065 containerd[1498]: time="2025-04-30T00:25:23.737580344Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Apr 30 00:25:23.738065 containerd[1498]: time="2025-04-30T00:25:23.737598607Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Apr 30 00:25:23.738065 containerd[1498]: time="2025-04-30T00:25:23.737616681Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Apr 30 00:25:23.738065 containerd[1498]: time="2025-04-30T00:25:23.737625418Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Apr 30 00:25:23.738065 containerd[1498]: time="2025-04-30T00:25:23.737634846Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Apr 30 00:25:23.738065 containerd[1498]: time="2025-04-30T00:25:23.737644373Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Apr 30 00:25:23.738065 containerd[1498]: time="2025-04-30T00:25:23.737654563Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Apr 30 00:25:23.738065 containerd[1498]: time="2025-04-30T00:25:23.737670112Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Apr 30 00:25:23.738065 containerd[1498]: time="2025-04-30T00:25:23.737679028Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Apr 30 00:25:23.738264 containerd[1498]: time="2025-04-30T00:25:23.737686603Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Apr 30 00:25:23.738914 containerd[1498]: time="2025-04-30T00:25:23.738612348Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Apr 30 00:25:23.738914 containerd[1498]: time="2025-04-30T00:25:23.738634660Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Apr 30 00:25:23.738914 containerd[1498]: time="2025-04-30T00:25:23.738643397Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Apr 30 00:25:23.738914 containerd[1498]: time="2025-04-30T00:25:23.738652684Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Apr 30 00:25:23.738914 containerd[1498]: time="2025-04-30T00:25:23.738660499Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Apr 30 00:25:23.738914 containerd[1498]: time="2025-04-30T00:25:23.738690966Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Apr 30 00:25:23.738914 containerd[1498]: time="2025-04-30T00:25:23.738699943Z" level=info msg="NRI interface is disabled by configuration."
Apr 30 00:25:23.738914 containerd[1498]: time="2025-04-30T00:25:23.738707377Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Apr 30 00:25:23.739238 containerd[1498]: time="2025-04-30T00:25:23.739190893Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Apr 30 00:25:23.739371 containerd[1498]: time="2025-04-30T00:25:23.739359770Z" level=info msg="Connect containerd service"
Apr 30 00:25:23.739457 containerd[1498]: time="2025-04-30T00:25:23.739446182Z" level=info msg="using legacy CRI server"
Apr 30 00:25:23.739518 containerd[1498]: time="2025-04-30T00:25:23.739492940Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Apr 30 00:25:23.740203 containerd[1498]: time="2025-04-30T00:25:23.739654743Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Apr 30 00:25:23.741197 containerd[1498]: time="2025-04-30T00:25:23.741167900Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Apr 30 00:25:23.741356 containerd[1498]: time="2025-04-30T00:25:23.741335655Z" level=info msg="Start subscribing containerd event"
Apr 30 00:25:23.745479 containerd[1498]: time="2025-04-30T00:25:23.743098100Z" level=info msg="Start recovering state"
Apr 30 00:25:23.745479 containerd[1498]: time="2025-04-30T00:25:23.743168312Z" level=info msg="Start event monitor"
Apr 30 00:25:23.745479 containerd[1498]: time="2025-04-30T00:25:23.743179823Z" level=info msg="Start snapshots syncer"
Apr 30 00:25:23.745479 containerd[1498]: time="2025-04-30T00:25:23.743187017Z" level=info msg="Start cni network conf syncer for default"
Apr 30 00:25:23.745479 containerd[1498]: time="2025-04-30T00:25:23.743192567Z" level=info msg="Start streaming server"
Apr 30 00:25:23.745479 containerd[1498]: time="2025-04-30T00:25:23.741556619Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Apr 30 00:25:23.745479 containerd[1498]: time="2025-04-30T00:25:23.743300029Z" level=info msg=serving... address=/run/containerd/containerd.sock
Apr 30 00:25:23.745479 containerd[1498]: time="2025-04-30T00:25:23.744573787Z" level=info msg="containerd successfully booted in 0.068853s"
Apr 30 00:25:23.743407 systemd[1]: Started containerd.service - containerd container runtime.
Apr 30 00:25:23.935938 tar[1496]: linux-amd64/LICENSE
Apr 30 00:25:23.936085 tar[1496]: linux-amd64/README.md
Apr 30 00:25:23.952240 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Apr 30 00:25:24.305184 systemd-networkd[1399]: eth1: Gained IPv6LL
Apr 30 00:25:24.305682 systemd-timesyncd[1382]: Network configuration changed, trying to establish connection.
Apr 30 00:25:24.321990 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Apr 30 00:25:24.325810 systemd[1]: Reached target network-online.target - Network is Online.
Apr 30 00:25:24.337259 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 00:25:24.339549 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Apr 30 00:25:24.366416 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Apr 30 00:25:24.393579 sshd_keygen[1489]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Apr 30 00:25:24.409662 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Apr 30 00:25:24.418063 systemd[1]: Starting issuegen.service - Generate /run/issue...
Apr 30 00:25:24.422780 systemd[1]: issuegen.service: Deactivated successfully.
Apr 30 00:25:24.422945 systemd[1]: Finished issuegen.service - Generate /run/issue.
Apr 30 00:25:24.428630 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Apr 30 00:25:24.438921 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Apr 30 00:25:24.449482 systemd[1]: Started getty@tty1.service - Getty on tty1.
Apr 30 00:25:24.452628 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Apr 30 00:25:24.453809 systemd[1]: Reached target getty.target - Login Prompts.
Apr 30 00:25:24.753173 systemd-networkd[1399]: eth0: Gained IPv6LL
Apr 30 00:25:24.754227 systemd-timesyncd[1382]: Network configuration changed, trying to establish connection.
Apr 30 00:25:25.032191 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 00:25:25.033733 systemd[1]: Reached target multi-user.target - Multi-User System.
Apr 30 00:25:25.036522 (kubelet)[1594]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 30 00:25:25.038396 systemd[1]: Startup finished in 1.132s (kernel) + 6.510s (initrd) + 3.899s (userspace) = 11.542s.
Apr 30 00:25:25.510120 kubelet[1594]: E0430 00:25:25.509988 1594 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 30 00:25:25.512450 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 30 00:25:25.512576 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 30 00:25:35.763343 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Apr 30 00:25:35.770242 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 00:25:35.852555 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 00:25:35.863394 (kubelet)[1613]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 30 00:25:35.904777 kubelet[1613]: E0430 00:25:35.904709 1613 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 30 00:25:35.907836 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 30 00:25:35.908057 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 30 00:25:46.158380 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Apr 30 00:25:46.163167 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 00:25:46.234528 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 00:25:46.237909 (kubelet)[1629]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 30 00:25:46.266920 kubelet[1629]: E0430 00:25:46.266870 1629 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 30 00:25:46.269171 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 30 00:25:46.269288 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 30 00:25:55.181376 systemd-timesyncd[1382]: Contacted time server 178.215.228.24:123 (2.flatcar.pool.ntp.org).
Apr 30 00:25:55.181439 systemd-timesyncd[1382]: Initial clock synchronization to Wed 2025-04-30 00:25:55.545112 UTC.
Apr 30 00:25:56.520415 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Apr 30 00:25:56.525230 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 00:25:56.615287 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 00:25:56.618920 (kubelet)[1645]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 30 00:25:56.654373 kubelet[1645]: E0430 00:25:56.654332 1645 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 30 00:25:56.656490 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 30 00:25:56.656619 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 30 00:26:06.831682 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Apr 30 00:26:06.842206 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 00:26:06.922680 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 00:26:06.926978 (kubelet)[1660]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 30 00:26:06.966747 kubelet[1660]: E0430 00:26:06.966669 1660 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 30 00:26:06.968859 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 30 00:26:06.968984 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 30 00:26:08.519633 update_engine[1478]: I20250430 00:26:08.519546 1478 update_attempter.cc:509] Updating boot flags...
Apr 30 00:26:08.549067 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 40 scanned by (udev-worker) (1677)
Apr 30 00:26:08.584075 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 40 scanned by (udev-worker) (1677)
Apr 30 00:26:08.619041 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 40 scanned by (udev-worker) (1677)
Apr 30 00:26:17.080674 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Apr 30 00:26:17.086446 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 00:26:17.162706 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 00:26:17.165741 (kubelet)[1697]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 30 00:26:17.195808 kubelet[1697]: E0430 00:26:17.195733 1697 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 30 00:26:17.198879 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 30 00:26:17.199032 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 30 00:26:27.330603 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6.
Apr 30 00:26:27.336247 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 00:26:27.414844 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 00:26:27.418983 (kubelet)[1712]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 30 00:26:27.451729 kubelet[1712]: E0430 00:26:27.451672 1712 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 30 00:26:27.453689 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 30 00:26:27.453814 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 30 00:26:37.580706 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7.
Apr 30 00:26:37.586224 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 00:26:37.663194 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 00:26:37.666117 (kubelet)[1727]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 00:26:37.697051 kubelet[1727]: E0430 00:26:37.696962 1727 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 00:26:37.698806 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 00:26:37.698989 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 00:26:47.830575 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. Apr 30 00:26:47.840181 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 00:26:47.912811 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 00:26:47.916048 (kubelet)[1743]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 00:26:47.943508 kubelet[1743]: E0430 00:26:47.943466 1743 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 00:26:47.945150 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 00:26:47.945284 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 00:26:58.080637 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9. 
Apr 30 00:26:58.091199 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 00:26:58.162517 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 00:26:58.165852 (kubelet)[1758]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 00:26:58.195757 kubelet[1758]: E0430 00:26:58.195705 1758 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 00:26:58.197344 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 00:26:58.197497 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 00:27:08.330706 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10. Apr 30 00:27:08.341194 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 00:27:08.414187 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 00:27:08.429559 (kubelet)[1773]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 00:27:08.459096 kubelet[1773]: E0430 00:27:08.459021 1773 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 00:27:08.460640 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 00:27:08.460808 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
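The crash-loop iterations above land on a strikingly regular cadence. As a quick sanity check (a sketch only, using the "Scheduled restart" timestamps copied from the entries above), the gap between consecutive restart jobs works out to roughly 10.25 s, which would be consistent with a `RestartSec=10` setting in the kubelet unit plus a little scheduling overhead:

```python
from datetime import datetime

# "Scheduled restart" timestamps copied from the journal entries above.
stamps = ["00:26:17.080674", "00:26:27.330603", "00:26:37.580706",
          "00:26:47.830575", "00:26:58.080637", "00:27:08.330706"]
times = [datetime.strptime(s, "%H:%M:%S.%f") for s in stamps]
gaps = [round((b - a).total_seconds(), 3) for a, b in zip(times, times[1:])]
print(gaps)  # every gap is ~10.25 s
```

The steady interval (rather than an exponentially growing one) indicates plain `Restart=`/`RestartSec=` behavior with no backoff, so the unit will keep retrying at this rate indefinitely.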
Apr 30 00:27:14.598441 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Apr 30 00:27:14.603272 systemd[1]: Started sshd@0-37.27.9.63:22-139.178.89.65:57694.service - OpenSSH per-connection server daemon (139.178.89.65:57694). Apr 30 00:27:15.582678 sshd[1781]: Accepted publickey for core from 139.178.89.65 port 57694 ssh2: RSA SHA256:Z5W/GaQaT6a1P913sOSNU1tCxAIyxSb0Z7O2Sul6T8E Apr 30 00:27:15.585248 sshd-session[1781]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:27:15.593865 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Apr 30 00:27:15.604396 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Apr 30 00:27:15.607047 systemd-logind[1474]: New session 1 of user core. Apr 30 00:27:15.615216 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Apr 30 00:27:15.621394 systemd[1]: Starting user@500.service - User Manager for UID 500... Apr 30 00:27:15.624639 (systemd)[1785]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Apr 30 00:27:15.717247 systemd[1785]: Queued start job for default target default.target. Apr 30 00:27:15.731934 systemd[1785]: Created slice app.slice - User Application Slice. Apr 30 00:27:15.731961 systemd[1785]: Reached target paths.target - Paths. Apr 30 00:27:15.731974 systemd[1785]: Reached target timers.target - Timers. Apr 30 00:27:15.733032 systemd[1785]: Starting dbus.socket - D-Bus User Message Bus Socket... Apr 30 00:27:15.742690 systemd[1785]: Listening on dbus.socket - D-Bus User Message Bus Socket. Apr 30 00:27:15.742756 systemd[1785]: Reached target sockets.target - Sockets. Apr 30 00:27:15.742772 systemd[1785]: Reached target basic.target - Basic System. Apr 30 00:27:15.742812 systemd[1785]: Reached target default.target - Main User Target. Apr 30 00:27:15.742841 systemd[1785]: Startup finished in 112ms. 
Apr 30 00:27:15.742945 systemd[1]: Started user@500.service - User Manager for UID 500. Apr 30 00:27:15.744183 systemd[1]: Started session-1.scope - Session 1 of User core. Apr 30 00:27:16.430319 systemd[1]: Started sshd@1-37.27.9.63:22-139.178.89.65:57700.service - OpenSSH per-connection server daemon (139.178.89.65:57700). Apr 30 00:27:17.400390 sshd[1796]: Accepted publickey for core from 139.178.89.65 port 57700 ssh2: RSA SHA256:Z5W/GaQaT6a1P913sOSNU1tCxAIyxSb0Z7O2Sul6T8E Apr 30 00:27:17.401656 sshd-session[1796]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:27:17.406566 systemd-logind[1474]: New session 2 of user core. Apr 30 00:27:17.422152 systemd[1]: Started session-2.scope - Session 2 of User core. Apr 30 00:27:18.075718 sshd[1798]: Connection closed by 139.178.89.65 port 57700 Apr 30 00:27:18.076302 sshd-session[1796]: pam_unix(sshd:session): session closed for user core Apr 30 00:27:18.078564 systemd[1]: sshd@1-37.27.9.63:22-139.178.89.65:57700.service: Deactivated successfully. Apr 30 00:27:18.079973 systemd[1]: session-2.scope: Deactivated successfully. Apr 30 00:27:18.080872 systemd-logind[1474]: Session 2 logged out. Waiting for processes to exit. Apr 30 00:27:18.081922 systemd-logind[1474]: Removed session 2. Apr 30 00:27:18.245120 systemd[1]: Started sshd@2-37.27.9.63:22-139.178.89.65:42778.service - OpenSSH per-connection server daemon (139.178.89.65:42778). Apr 30 00:27:18.580508 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 11. Apr 30 00:27:18.592502 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 00:27:18.675267 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 30 00:27:18.678470 (kubelet)[1813]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 00:27:18.706809 kubelet[1813]: E0430 00:27:18.706754 1813 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 00:27:18.708829 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 00:27:18.708970 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 00:27:19.215142 sshd[1803]: Accepted publickey for core from 139.178.89.65 port 42778 ssh2: RSA SHA256:Z5W/GaQaT6a1P913sOSNU1tCxAIyxSb0Z7O2Sul6T8E Apr 30 00:27:19.216314 sshd-session[1803]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:27:19.220091 systemd-logind[1474]: New session 3 of user core. Apr 30 00:27:19.226137 systemd[1]: Started session-3.scope - Session 3 of User core. Apr 30 00:27:19.883748 sshd[1820]: Connection closed by 139.178.89.65 port 42778 Apr 30 00:27:19.884370 sshd-session[1803]: pam_unix(sshd:session): session closed for user core Apr 30 00:27:19.886803 systemd[1]: sshd@2-37.27.9.63:22-139.178.89.65:42778.service: Deactivated successfully. Apr 30 00:27:19.888289 systemd[1]: session-3.scope: Deactivated successfully. Apr 30 00:27:19.889318 systemd-logind[1474]: Session 3 logged out. Waiting for processes to exit. Apr 30 00:27:19.890248 systemd-logind[1474]: Removed session 3. Apr 30 00:27:20.049097 systemd[1]: Started sshd@3-37.27.9.63:22-139.178.89.65:42786.service - OpenSSH per-connection server daemon (139.178.89.65:42786). 
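Every kubelet failure above has the same root cause: the process cannot open /var/lib/kubelet/config.yaml. On a kubeadm-provisioned node that file is only written once `kubeadm init` or `kubeadm join` runs, so a crash loop like this is the expected pre-bootstrap state. A minimal, illustrative sketch of checking for that condition (the path is taken from the error message in the log; this snippet is not part of the log itself):

```shell
# Path comes from the kubelet error above. If the file is absent, the
# crash loop is expected until `kubeadm init`/`kubeadm join` creates it.
if [ ! -f /var/lib/kubelet/config.yaml ]; then
  echo "kubelet config missing"
fi
```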
Apr 30 00:27:21.019112 sshd[1825]: Accepted publickey for core from 139.178.89.65 port 42786 ssh2: RSA SHA256:Z5W/GaQaT6a1P913sOSNU1tCxAIyxSb0Z7O2Sul6T8E Apr 30 00:27:21.020366 sshd-session[1825]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:27:21.024834 systemd-logind[1474]: New session 4 of user core. Apr 30 00:27:21.033166 systemd[1]: Started session-4.scope - Session 4 of User core. Apr 30 00:27:21.693300 sshd[1827]: Connection closed by 139.178.89.65 port 42786 Apr 30 00:27:21.693902 sshd-session[1825]: pam_unix(sshd:session): session closed for user core Apr 30 00:27:21.696446 systemd[1]: sshd@3-37.27.9.63:22-139.178.89.65:42786.service: Deactivated successfully. Apr 30 00:27:21.697823 systemd[1]: session-4.scope: Deactivated successfully. Apr 30 00:27:21.699224 systemd-logind[1474]: Session 4 logged out. Waiting for processes to exit. Apr 30 00:27:21.700436 systemd-logind[1474]: Removed session 4. Apr 30 00:27:21.860564 systemd[1]: Started sshd@4-37.27.9.63:22-139.178.89.65:42792.service - OpenSSH per-connection server daemon (139.178.89.65:42792). Apr 30 00:27:22.832407 sshd[1832]: Accepted publickey for core from 139.178.89.65 port 42792 ssh2: RSA SHA256:Z5W/GaQaT6a1P913sOSNU1tCxAIyxSb0Z7O2Sul6T8E Apr 30 00:27:22.833643 sshd-session[1832]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:27:22.838068 systemd-logind[1474]: New session 5 of user core. Apr 30 00:27:22.847360 systemd[1]: Started session-5.scope - Session 5 of User core. 
Apr 30 00:27:23.357439 sudo[1835]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Apr 30 00:27:23.357706 sudo[1835]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 30 00:27:23.371724 sudo[1835]: pam_unix(sudo:session): session closed for user root Apr 30 00:27:23.528950 sshd[1834]: Connection closed by 139.178.89.65 port 42792 Apr 30 00:27:23.529715 sshd-session[1832]: pam_unix(sshd:session): session closed for user core Apr 30 00:27:23.532686 systemd[1]: sshd@4-37.27.9.63:22-139.178.89.65:42792.service: Deactivated successfully. Apr 30 00:27:23.534388 systemd[1]: session-5.scope: Deactivated successfully. Apr 30 00:27:23.535808 systemd-logind[1474]: Session 5 logged out. Waiting for processes to exit. Apr 30 00:27:23.536996 systemd-logind[1474]: Removed session 5. Apr 30 00:27:23.694454 systemd[1]: Started sshd@5-37.27.9.63:22-139.178.89.65:42804.service - OpenSSH per-connection server daemon (139.178.89.65:42804). Apr 30 00:27:24.660990 sshd[1840]: Accepted publickey for core from 139.178.89.65 port 42804 ssh2: RSA SHA256:Z5W/GaQaT6a1P913sOSNU1tCxAIyxSb0Z7O2Sul6T8E Apr 30 00:27:24.662371 sshd-session[1840]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:27:24.667096 systemd-logind[1474]: New session 6 of user core. Apr 30 00:27:24.677173 systemd[1]: Started session-6.scope - Session 6 of User core. 
Apr 30 00:27:25.177394 sudo[1844]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Apr 30 00:27:25.177659 sudo[1844]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 30 00:27:25.180509 sudo[1844]: pam_unix(sudo:session): session closed for user root Apr 30 00:27:25.184859 sudo[1843]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Apr 30 00:27:25.185153 sudo[1843]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 30 00:27:25.195257 systemd[1]: Starting audit-rules.service - Load Audit Rules... Apr 30 00:27:25.217803 augenrules[1866]: No rules Apr 30 00:27:25.218851 systemd[1]: audit-rules.service: Deactivated successfully. Apr 30 00:27:25.219028 systemd[1]: Finished audit-rules.service - Load Audit Rules. Apr 30 00:27:25.220235 sudo[1843]: pam_unix(sudo:session): session closed for user root Apr 30 00:27:25.377564 sshd[1842]: Connection closed by 139.178.89.65 port 42804 Apr 30 00:27:25.378102 sshd-session[1840]: pam_unix(sshd:session): session closed for user core Apr 30 00:27:25.380615 systemd[1]: sshd@5-37.27.9.63:22-139.178.89.65:42804.service: Deactivated successfully. Apr 30 00:27:25.382079 systemd[1]: session-6.scope: Deactivated successfully. Apr 30 00:27:25.383096 systemd-logind[1474]: Session 6 logged out. Waiting for processes to exit. Apr 30 00:27:25.384143 systemd-logind[1474]: Removed session 6. Apr 30 00:27:25.543929 systemd[1]: Started sshd@6-37.27.9.63:22-139.178.89.65:42810.service - OpenSSH per-connection server daemon (139.178.89.65:42810). 
Apr 30 00:27:26.514295 sshd[1874]: Accepted publickey for core from 139.178.89.65 port 42810 ssh2: RSA SHA256:Z5W/GaQaT6a1P913sOSNU1tCxAIyxSb0Z7O2Sul6T8E Apr 30 00:27:26.516481 sshd-session[1874]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:27:26.521846 systemd-logind[1474]: New session 7 of user core. Apr 30 00:27:26.528260 systemd[1]: Started session-7.scope - Session 7 of User core. Apr 30 00:27:27.033126 sudo[1877]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Apr 30 00:27:27.033516 sudo[1877]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 30 00:27:27.328222 systemd[1]: Starting docker.service - Docker Application Container Engine... Apr 30 00:27:27.329279 (dockerd)[1894]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Apr 30 00:27:27.584902 dockerd[1894]: time="2025-04-30T00:27:27.584727458Z" level=info msg="Starting up" Apr 30 00:27:27.690596 dockerd[1894]: time="2025-04-30T00:27:27.690515440Z" level=info msg="Loading containers: start." Apr 30 00:27:27.833222 kernel: Initializing XFRM netlink socket Apr 30 00:27:27.915943 systemd-networkd[1399]: docker0: Link UP Apr 30 00:27:27.938142 dockerd[1894]: time="2025-04-30T00:27:27.938090737Z" level=info msg="Loading containers: done." 
Apr 30 00:27:27.952653 dockerd[1894]: time="2025-04-30T00:27:27.952594385Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Apr 30 00:27:27.952778 dockerd[1894]: time="2025-04-30T00:27:27.952716007Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1 Apr 30 00:27:27.952883 dockerd[1894]: time="2025-04-30T00:27:27.952851696Z" level=info msg="Daemon has completed initialization" Apr 30 00:27:27.978343 dockerd[1894]: time="2025-04-30T00:27:27.978300140Z" level=info msg="API listen on /run/docker.sock" Apr 30 00:27:27.979242 systemd[1]: Started docker.service - Docker Application Container Engine. Apr 30 00:27:28.830511 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 12. Apr 30 00:27:28.842187 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 00:27:28.908695 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 00:27:28.921231 (kubelet)[2087]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 00:27:28.949216 kubelet[2087]: E0430 00:27:28.949161 2087 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 00:27:28.951335 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 00:27:28.951450 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Apr 30 00:27:29.007386 containerd[1498]: time="2025-04-30T00:27:29.007334686Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\"" Apr 30 00:27:29.568189 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1301539558.mount: Deactivated successfully. Apr 30 00:27:31.096101 systemd[1]: Started sshd@7-37.27.9.63:22-92.255.57.132:55620.service - OpenSSH per-connection server daemon (92.255.57.132:55620). Apr 30 00:27:31.189811 sshd[2145]: Invalid user zabbix from 92.255.57.132 port 55620 Apr 30 00:27:31.205835 sshd[2145]: Connection closed by invalid user zabbix 92.255.57.132 port 55620 [preauth] Apr 30 00:27:31.207547 systemd[1]: sshd@7-37.27.9.63:22-92.255.57.132:55620.service: Deactivated successfully. Apr 30 00:27:32.834669 containerd[1498]: time="2025-04-30T00:27:32.834620363Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:27:32.835515 containerd[1498]: time="2025-04-30T00:27:32.835479014Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.8: active requests=0, bytes read=27961081" Apr 30 00:27:32.836096 containerd[1498]: time="2025-04-30T00:27:32.836061203Z" level=info msg="ImageCreate event name:\"sha256:e6d208e868a9ca7f89efcb0d5bddc55a62df551cb4fb39c5099a2fe7b0e33adc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:27:32.838340 containerd[1498]: time="2025-04-30T00:27:32.838297585Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:27:32.839399 containerd[1498]: time="2025-04-30T00:27:32.839274163Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.8\" with image id \"sha256:e6d208e868a9ca7f89efcb0d5bddc55a62df551cb4fb39c5099a2fe7b0e33adc\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.8\", repo digest 
\"registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625\", size \"27957787\" in 3.831902073s" Apr 30 00:27:32.839399 containerd[1498]: time="2025-04-30T00:27:32.839301942Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\" returns image reference \"sha256:e6d208e868a9ca7f89efcb0d5bddc55a62df551cb4fb39c5099a2fe7b0e33adc\"" Apr 30 00:27:32.840806 containerd[1498]: time="2025-04-30T00:27:32.840777293Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\"" Apr 30 00:27:34.934210 containerd[1498]: time="2025-04-30T00:27:34.934148950Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:27:34.935572 containerd[1498]: time="2025-04-30T00:27:34.935512792Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.8: active requests=0, bytes read=24713798" Apr 30 00:27:34.936321 containerd[1498]: time="2025-04-30T00:27:34.936288508Z" level=info msg="ImageCreate event name:\"sha256:fbda0bc3bc4bb93c8b2d8627a9aa8d945c200b51e48c88f9b837dde628fc7c8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:27:34.939106 containerd[1498]: time="2025-04-30T00:27:34.939060151Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:27:34.940739 containerd[1498]: time="2025-04-30T00:27:34.940326221Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.8\" with image id \"sha256:fbda0bc3bc4bb93c8b2d8627a9aa8d945c200b51e48c88f9b837dde628fc7c8f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747\", size \"26202149\" in 
2.099521891s" Apr 30 00:27:34.940739 containerd[1498]: time="2025-04-30T00:27:34.940363026Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\" returns image reference \"sha256:fbda0bc3bc4bb93c8b2d8627a9aa8d945c200b51e48c88f9b837dde628fc7c8f\"" Apr 30 00:27:34.940986 containerd[1498]: time="2025-04-30T00:27:34.940955569Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\"" Apr 30 00:27:36.506915 containerd[1498]: time="2025-04-30T00:27:36.506852799Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:27:36.507936 containerd[1498]: time="2025-04-30T00:27:36.507892133Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.8: active requests=0, bytes read=18780408" Apr 30 00:27:36.508551 containerd[1498]: time="2025-04-30T00:27:36.508510048Z" level=info msg="ImageCreate event name:\"sha256:2a9c646db0be37003c2b50605a252f7139145411d9e4e0badd8ae07f56ce5eb8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:27:36.510566 containerd[1498]: time="2025-04-30T00:27:36.510530170Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:27:36.511641 containerd[1498]: time="2025-04-30T00:27:36.511351195Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.8\" with image id \"sha256:2a9c646db0be37003c2b50605a252f7139145411d9e4e0badd8ae07f56ce5eb8\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a\", size \"20268777\" in 1.57036905s" Apr 30 00:27:36.511641 containerd[1498]: time="2025-04-30T00:27:36.511374317Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\" returns image reference 
\"sha256:2a9c646db0be37003c2b50605a252f7139145411d9e4e0badd8ae07f56ce5eb8\"" Apr 30 00:27:36.511778 containerd[1498]: time="2025-04-30T00:27:36.511757395Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\"" Apr 30 00:27:37.519414 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2140533851.mount: Deactivated successfully. Apr 30 00:27:37.785508 containerd[1498]: time="2025-04-30T00:27:37.785102430Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:27:37.786207 containerd[1498]: time="2025-04-30T00:27:37.785768175Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.8: active requests=0, bytes read=30354653" Apr 30 00:27:37.786727 containerd[1498]: time="2025-04-30T00:27:37.786672332Z" level=info msg="ImageCreate event name:\"sha256:7d73f013cedcf301aef42272c93e4c1174dab1a8eccd96840091ef04b63480f2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:27:37.788415 containerd[1498]: time="2025-04-30T00:27:37.788391428Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:27:37.789216 containerd[1498]: time="2025-04-30T00:27:37.788699136Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.8\" with image id \"sha256:7d73f013cedcf301aef42272c93e4c1174dab1a8eccd96840091ef04b63480f2\", repo tag \"registry.k8s.io/kube-proxy:v1.31.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838\", size \"30353644\" in 1.276917357s" Apr 30 00:27:37.789216 containerd[1498]: time="2025-04-30T00:27:37.788743014Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\" returns image reference \"sha256:7d73f013cedcf301aef42272c93e4c1174dab1a8eccd96840091ef04b63480f2\"" Apr 30 00:27:37.789216 
containerd[1498]: time="2025-04-30T00:27:37.789206379Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Apr 30 00:27:38.327595 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2340356875.mount: Deactivated successfully. Apr 30 00:27:39.080479 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 13. Apr 30 00:27:39.088687 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 00:27:39.164124 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 00:27:39.167966 (kubelet)[2222]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 00:27:39.198828 kubelet[2222]: E0430 00:27:39.198792 2222 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 00:27:39.201033 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 00:27:39.201157 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Apr 30 00:27:39.228568 containerd[1498]: time="2025-04-30T00:27:39.228525556Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:27:39.229408 containerd[1498]: time="2025-04-30T00:27:39.229371660Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185843" Apr 30 00:27:39.230033 containerd[1498]: time="2025-04-30T00:27:39.229972645Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:27:39.232032 containerd[1498]: time="2025-04-30T00:27:39.231978333Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:27:39.232910 containerd[1498]: time="2025-04-30T00:27:39.232806885Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.443580161s" Apr 30 00:27:39.232910 containerd[1498]: time="2025-04-30T00:27:39.232830246Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Apr 30 00:27:39.233397 containerd[1498]: time="2025-04-30T00:27:39.233379940Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Apr 30 00:27:39.741139 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3474217861.mount: Deactivated successfully. 
Apr 30 00:27:39.747728 containerd[1498]: time="2025-04-30T00:27:39.747654295Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:27:39.748450 containerd[1498]: time="2025-04-30T00:27:39.748403043Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321160" Apr 30 00:27:39.749460 containerd[1498]: time="2025-04-30T00:27:39.749415605Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:27:39.751848 containerd[1498]: time="2025-04-30T00:27:39.751776758Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:27:39.752709 containerd[1498]: time="2025-04-30T00:27:39.752648808Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 519.242251ms" Apr 30 00:27:39.752709 containerd[1498]: time="2025-04-30T00:27:39.752701263Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Apr 30 00:27:39.753569 containerd[1498]: time="2025-04-30T00:27:39.753352577Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Apr 30 00:27:40.254583 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1070123471.mount: Deactivated successfully. 
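Each "Pulled image" entry above pairs a reported size with a wall-clock duration, so effective pull throughput can be read off directly. A rough back-of-the-envelope calculation (sizes in bytes and durations copied verbatim from the messages above; note these are the sizes containerd reports, and that tiny images like pause are latency-dominated, so their MB/s figure is not a meaningful bandwidth estimate):

```python
# (size_bytes, seconds) pairs copied from the "Pulled image" entries above.
pulls = {
    "kube-apiserver:v1.31.8": (27957787, 3.831902073),
    "kube-proxy:v1.31.8":     (30353644, 1.276917357),
    "coredns:v1.11.1":        (18182961, 1.443580161),
    "pause:3.10":             (320368,   0.519242251),
}
rates = {name: size / secs for name, (size, secs) in pulls.items()}
for name, rate in rates.items():
    print(f"{name}: {rate / 1e6:.1f} MB/s")
```

The spread (about 7 MB/s for the first pull versus roughly 24 MB/s for kube-proxy) is plausible warm-up behavior: later pulls benefit from already-established registry connections.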
Apr 30 00:27:42.434160 containerd[1498]: time="2025-04-30T00:27:42.434109948Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:27:42.434982 containerd[1498]: time="2025-04-30T00:27:42.434947478Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56780083" Apr 30 00:27:42.435474 containerd[1498]: time="2025-04-30T00:27:42.435437832Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:27:42.437839 containerd[1498]: time="2025-04-30T00:27:42.437809258Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:27:42.439643 containerd[1498]: time="2025-04-30T00:27:42.439142774Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 2.685761584s" Apr 30 00:27:42.439643 containerd[1498]: time="2025-04-30T00:27:42.439172707Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Apr 30 00:27:44.528477 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 00:27:44.536288 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 00:27:44.560627 systemd[1]: Reloading requested from client PID 2313 ('systemctl') (unit session-7.scope)... Apr 30 00:27:44.560643 systemd[1]: Reloading... 
Apr 30 00:27:44.644035 zram_generator::config[2353]: No configuration found. Apr 30 00:27:44.731137 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 30 00:27:44.790147 systemd[1]: Reloading finished in 229 ms. Apr 30 00:27:44.826650 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Apr 30 00:27:44.826719 systemd[1]: kubelet.service: Failed with result 'signal'. Apr 30 00:27:44.827055 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 00:27:44.828491 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 00:27:44.904734 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 00:27:44.907984 (kubelet)[2407]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 30 00:27:44.946630 kubelet[2407]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 30 00:27:44.947213 kubelet[2407]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Apr 30 00:27:44.947213 kubelet[2407]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Apr 30 00:27:44.947213 kubelet[2407]: I0430 00:27:44.947084 2407 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 30 00:27:45.132107 kubelet[2407]: I0430 00:27:45.132055 2407 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Apr 30 00:27:45.132107 kubelet[2407]: I0430 00:27:45.132092 2407 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 30 00:27:45.132360 kubelet[2407]: I0430 00:27:45.132329 2407 server.go:929] "Client rotation is on, will bootstrap in background" Apr 30 00:27:45.155911 kubelet[2407]: I0430 00:27:45.155542 2407 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 30 00:27:45.160264 kubelet[2407]: E0430 00:27:45.160031 2407 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://37.27.9.63:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 37.27.9.63:6443: connect: connection refused" logger="UnhandledError" Apr 30 00:27:45.169466 kubelet[2407]: E0430 00:27:45.169397 2407 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 30 00:27:45.169466 kubelet[2407]: I0430 00:27:45.169434 2407 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Apr 30 00:27:45.174459 kubelet[2407]: I0430 00:27:45.174387 2407 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Apr 30 00:27:45.177593 kubelet[2407]: I0430 00:27:45.177533 2407 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Apr 30 00:27:45.177704 kubelet[2407]: I0430 00:27:45.177669 2407 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 30 00:27:45.177892 kubelet[2407]: I0430 00:27:45.177695 2407 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4152-2-3-b-856bdfce49","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","Topolog
yManagerPolicyOptions":null,"CgroupVersion":2} Apr 30 00:27:45.177892 kubelet[2407]: I0430 00:27:45.177862 2407 topology_manager.go:138] "Creating topology manager with none policy" Apr 30 00:27:45.177892 kubelet[2407]: I0430 00:27:45.177870 2407 container_manager_linux.go:300] "Creating device plugin manager" Apr 30 00:27:45.178207 kubelet[2407]: I0430 00:27:45.177968 2407 state_mem.go:36] "Initialized new in-memory state store" Apr 30 00:27:45.179652 kubelet[2407]: I0430 00:27:45.179605 2407 kubelet.go:408] "Attempting to sync node with API server" Apr 30 00:27:45.179652 kubelet[2407]: I0430 00:27:45.179629 2407 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 30 00:27:45.179652 kubelet[2407]: I0430 00:27:45.179654 2407 kubelet.go:314] "Adding apiserver pod source" Apr 30 00:27:45.179815 kubelet[2407]: I0430 00:27:45.179682 2407 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 30 00:27:45.191562 kubelet[2407]: W0430 00:27:45.191037 2407 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://37.27.9.63:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152-2-3-b-856bdfce49&limit=500&resourceVersion=0": dial tcp 37.27.9.63:6443: connect: connection refused Apr 30 00:27:45.191562 kubelet[2407]: E0430 00:27:45.191114 2407 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://37.27.9.63:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152-2-3-b-856bdfce49&limit=500&resourceVersion=0\": dial tcp 37.27.9.63:6443: connect: connection refused" logger="UnhandledError" Apr 30 00:27:45.192493 kubelet[2407]: I0430 00:27:45.191987 2407 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Apr 30 00:27:45.192898 kubelet[2407]: W0430 00:27:45.192790 2407 reflector.go:561] k8s.io/client-go/informers/factory.go:160: 
failed to list *v1.Service: Get "https://37.27.9.63:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 37.27.9.63:6443: connect: connection refused Apr 30 00:27:45.192898 kubelet[2407]: E0430 00:27:45.192837 2407 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://37.27.9.63:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 37.27.9.63:6443: connect: connection refused" logger="UnhandledError" Apr 30 00:27:45.197400 kubelet[2407]: I0430 00:27:45.197348 2407 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Apr 30 00:27:45.198357 kubelet[2407]: W0430 00:27:45.198320 2407 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Apr 30 00:27:45.199469 kubelet[2407]: I0430 00:27:45.199260 2407 server.go:1269] "Started kubelet" Apr 30 00:27:45.199921 kubelet[2407]: I0430 00:27:45.199859 2407 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Apr 30 00:27:45.201227 kubelet[2407]: I0430 00:27:45.200711 2407 server.go:460] "Adding debug handlers to kubelet server" Apr 30 00:27:45.202848 kubelet[2407]: I0430 00:27:45.202533 2407 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 30 00:27:45.203924 kubelet[2407]: I0430 00:27:45.203857 2407 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 30 00:27:45.204187 kubelet[2407]: I0430 00:27:45.204090 2407 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 30 00:27:45.208481 kubelet[2407]: E0430 00:27:45.205463 2407 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://37.27.9.63:6443/api/v1/namespaces/default/events\": dial tcp 37.27.9.63:6443: 
connect: connection refused" event="&Event{ObjectMeta:{ci-4152-2-3-b-856bdfce49.183af11201abc3e4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4152-2-3-b-856bdfce49,UID:ci-4152-2-3-b-856bdfce49,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4152-2-3-b-856bdfce49,},FirstTimestamp:2025-04-30 00:27:45.199227876 +0000 UTC m=+0.288308869,LastTimestamp:2025-04-30 00:27:45.199227876 +0000 UTC m=+0.288308869,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4152-2-3-b-856bdfce49,}" Apr 30 00:27:45.209383 kubelet[2407]: I0430 00:27:45.209353 2407 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 30 00:27:45.210916 kubelet[2407]: E0430 00:27:45.210778 2407 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4152-2-3-b-856bdfce49\" not found" Apr 30 00:27:45.210916 kubelet[2407]: I0430 00:27:45.210811 2407 volume_manager.go:289] "Starting Kubelet Volume Manager" Apr 30 00:27:45.212958 kubelet[2407]: I0430 00:27:45.212926 2407 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Apr 30 00:27:45.212958 kubelet[2407]: I0430 00:27:45.213000 2407 reconciler.go:26] "Reconciler: start to sync state" Apr 30 00:27:45.214288 kubelet[2407]: W0430 00:27:45.214121 2407 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://37.27.9.63:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 37.27.9.63:6443: connect: connection refused Apr 30 00:27:45.214288 kubelet[2407]: E0430 00:27:45.214228 2407 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch 
*v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://37.27.9.63:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 37.27.9.63:6443: connect: connection refused" logger="UnhandledError" Apr 30 00:27:45.215499 kubelet[2407]: I0430 00:27:45.214489 2407 factory.go:221] Registration of the systemd container factory successfully Apr 30 00:27:45.215499 kubelet[2407]: I0430 00:27:45.214578 2407 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 30 00:27:45.215977 kubelet[2407]: E0430 00:27:45.215779 2407 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://37.27.9.63:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152-2-3-b-856bdfce49?timeout=10s\": dial tcp 37.27.9.63:6443: connect: connection refused" interval="200ms" Apr 30 00:27:45.217134 kubelet[2407]: I0430 00:27:45.217118 2407 factory.go:221] Registration of the containerd container factory successfully Apr 30 00:27:45.220925 kubelet[2407]: E0430 00:27:45.220887 2407 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 30 00:27:45.229067 kubelet[2407]: I0430 00:27:45.229032 2407 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Apr 30 00:27:45.231617 kubelet[2407]: I0430 00:27:45.231333 2407 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Apr 30 00:27:45.231617 kubelet[2407]: I0430 00:27:45.231363 2407 status_manager.go:217] "Starting to sync pod status with apiserver" Apr 30 00:27:45.231617 kubelet[2407]: I0430 00:27:45.231380 2407 kubelet.go:2321] "Starting kubelet main sync loop" Apr 30 00:27:45.231617 kubelet[2407]: E0430 00:27:45.231420 2407 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 30 00:27:45.241614 kubelet[2407]: W0430 00:27:45.241542 2407 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://37.27.9.63:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 37.27.9.63:6443: connect: connection refused Apr 30 00:27:45.241614 kubelet[2407]: E0430 00:27:45.241577 2407 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://37.27.9.63:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 37.27.9.63:6443: connect: connection refused" logger="UnhandledError" Apr 30 00:27:45.248933 kubelet[2407]: I0430 00:27:45.248880 2407 cpu_manager.go:214] "Starting CPU manager" policy="none" Apr 30 00:27:45.248933 kubelet[2407]: I0430 00:27:45.248892 2407 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Apr 30 00:27:45.248933 kubelet[2407]: I0430 00:27:45.248907 2407 state_mem.go:36] "Initialized new in-memory state store" Apr 30 00:27:45.260623 kubelet[2407]: I0430 00:27:45.260596 2407 policy_none.go:49] "None policy: Start" Apr 30 00:27:45.261796 kubelet[2407]: I0430 00:27:45.261779 2407 memory_manager.go:170] "Starting memorymanager" policy="None" Apr 30 00:27:45.262224 kubelet[2407]: I0430 00:27:45.261933 2407 state_mem.go:35] "Initializing new in-memory state store" Apr 30 00:27:45.285357 systemd[1]: Created slice kubepods.slice - libcontainer 
container kubepods.slice. Apr 30 00:27:45.302494 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Apr 30 00:27:45.304988 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Apr 30 00:27:45.311933 kubelet[2407]: E0430 00:27:45.311887 2407 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4152-2-3-b-856bdfce49\" not found" Apr 30 00:27:45.313708 kubelet[2407]: I0430 00:27:45.313675 2407 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Apr 30 00:27:45.313900 kubelet[2407]: I0430 00:27:45.313869 2407 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 30 00:27:45.313935 kubelet[2407]: I0430 00:27:45.313886 2407 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 30 00:27:45.314789 kubelet[2407]: I0430 00:27:45.314756 2407 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 30 00:27:45.316194 kubelet[2407]: E0430 00:27:45.316175 2407 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4152-2-3-b-856bdfce49\" not found" Apr 30 00:27:45.344896 systemd[1]: Created slice kubepods-burstable-pod4db39075849125ceb817e43294f12923.slice - libcontainer container kubepods-burstable-pod4db39075849125ceb817e43294f12923.slice. Apr 30 00:27:45.364496 systemd[1]: Created slice kubepods-burstable-pod2b6c0edf66843b2bbae5344a8079bd14.slice - libcontainer container kubepods-burstable-pod2b6c0edf66843b2bbae5344a8079bd14.slice. Apr 30 00:27:45.370714 systemd[1]: Created slice kubepods-burstable-podca35dd20a3efd2322fe4f597de98f20a.slice - libcontainer container kubepods-burstable-podca35dd20a3efd2322fe4f597de98f20a.slice. 
Apr 30 00:27:45.417071 kubelet[2407]: E0430 00:27:45.416188 2407 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://37.27.9.63:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152-2-3-b-856bdfce49?timeout=10s\": dial tcp 37.27.9.63:6443: connect: connection refused" interval="400ms" Apr 30 00:27:45.417071 kubelet[2407]: I0430 00:27:45.416323 2407 kubelet_node_status.go:72] "Attempting to register node" node="ci-4152-2-3-b-856bdfce49" Apr 30 00:27:45.417071 kubelet[2407]: E0430 00:27:45.416725 2407 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://37.27.9.63:6443/api/v1/nodes\": dial tcp 37.27.9.63:6443: connect: connection refused" node="ci-4152-2-3-b-856bdfce49" Apr 30 00:27:45.514606 kubelet[2407]: I0430 00:27:45.514363 2407 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4db39075849125ceb817e43294f12923-ca-certs\") pod \"kube-apiserver-ci-4152-2-3-b-856bdfce49\" (UID: \"4db39075849125ceb817e43294f12923\") " pod="kube-system/kube-apiserver-ci-4152-2-3-b-856bdfce49" Apr 30 00:27:45.514606 kubelet[2407]: I0430 00:27:45.514412 2407 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4db39075849125ceb817e43294f12923-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4152-2-3-b-856bdfce49\" (UID: \"4db39075849125ceb817e43294f12923\") " pod="kube-system/kube-apiserver-ci-4152-2-3-b-856bdfce49" Apr 30 00:27:45.514606 kubelet[2407]: I0430 00:27:45.514435 2407 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2b6c0edf66843b2bbae5344a8079bd14-ca-certs\") pod \"kube-controller-manager-ci-4152-2-3-b-856bdfce49\" (UID: \"2b6c0edf66843b2bbae5344a8079bd14\") " 
pod="kube-system/kube-controller-manager-ci-4152-2-3-b-856bdfce49" Apr 30 00:27:45.514606 kubelet[2407]: I0430 00:27:45.514452 2407 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2b6c0edf66843b2bbae5344a8079bd14-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4152-2-3-b-856bdfce49\" (UID: \"2b6c0edf66843b2bbae5344a8079bd14\") " pod="kube-system/kube-controller-manager-ci-4152-2-3-b-856bdfce49" Apr 30 00:27:45.514606 kubelet[2407]: I0430 00:27:45.514473 2407 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4db39075849125ceb817e43294f12923-k8s-certs\") pod \"kube-apiserver-ci-4152-2-3-b-856bdfce49\" (UID: \"4db39075849125ceb817e43294f12923\") " pod="kube-system/kube-apiserver-ci-4152-2-3-b-856bdfce49" Apr 30 00:27:45.514928 kubelet[2407]: I0430 00:27:45.514489 2407 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/2b6c0edf66843b2bbae5344a8079bd14-flexvolume-dir\") pod \"kube-controller-manager-ci-4152-2-3-b-856bdfce49\" (UID: \"2b6c0edf66843b2bbae5344a8079bd14\") " pod="kube-system/kube-controller-manager-ci-4152-2-3-b-856bdfce49" Apr 30 00:27:45.514928 kubelet[2407]: I0430 00:27:45.514504 2407 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2b6c0edf66843b2bbae5344a8079bd14-k8s-certs\") pod \"kube-controller-manager-ci-4152-2-3-b-856bdfce49\" (UID: \"2b6c0edf66843b2bbae5344a8079bd14\") " pod="kube-system/kube-controller-manager-ci-4152-2-3-b-856bdfce49" Apr 30 00:27:45.514928 kubelet[2407]: I0430 00:27:45.514521 2407 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/2b6c0edf66843b2bbae5344a8079bd14-kubeconfig\") pod \"kube-controller-manager-ci-4152-2-3-b-856bdfce49\" (UID: \"2b6c0edf66843b2bbae5344a8079bd14\") " pod="kube-system/kube-controller-manager-ci-4152-2-3-b-856bdfce49" Apr 30 00:27:45.514928 kubelet[2407]: I0430 00:27:45.514563 2407 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ca35dd20a3efd2322fe4f597de98f20a-kubeconfig\") pod \"kube-scheduler-ci-4152-2-3-b-856bdfce49\" (UID: \"ca35dd20a3efd2322fe4f597de98f20a\") " pod="kube-system/kube-scheduler-ci-4152-2-3-b-856bdfce49" Apr 30 00:27:45.618688 kubelet[2407]: I0430 00:27:45.618648 2407 kubelet_node_status.go:72] "Attempting to register node" node="ci-4152-2-3-b-856bdfce49" Apr 30 00:27:45.619021 kubelet[2407]: E0430 00:27:45.618974 2407 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://37.27.9.63:6443/api/v1/nodes\": dial tcp 37.27.9.63:6443: connect: connection refused" node="ci-4152-2-3-b-856bdfce49" Apr 30 00:27:45.661894 containerd[1498]: time="2025-04-30T00:27:45.661832400Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4152-2-3-b-856bdfce49,Uid:4db39075849125ceb817e43294f12923,Namespace:kube-system,Attempt:0,}" Apr 30 00:27:45.667547 containerd[1498]: time="2025-04-30T00:27:45.667420041Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4152-2-3-b-856bdfce49,Uid:2b6c0edf66843b2bbae5344a8079bd14,Namespace:kube-system,Attempt:0,}" Apr 30 00:27:45.672965 containerd[1498]: time="2025-04-30T00:27:45.672924649Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4152-2-3-b-856bdfce49,Uid:ca35dd20a3efd2322fe4f597de98f20a,Namespace:kube-system,Attempt:0,}" Apr 30 00:27:45.817137 kubelet[2407]: E0430 00:27:45.817074 2407 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://37.27.9.63:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152-2-3-b-856bdfce49?timeout=10s\": dial tcp 37.27.9.63:6443: connect: connection refused" interval="800ms" Apr 30 00:27:46.020965 kubelet[2407]: I0430 00:27:46.020841 2407 kubelet_node_status.go:72] "Attempting to register node" node="ci-4152-2-3-b-856bdfce49" Apr 30 00:27:46.021443 kubelet[2407]: E0430 00:27:46.021176 2407 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://37.27.9.63:6443/api/v1/nodes\": dial tcp 37.27.9.63:6443: connect: connection refused" node="ci-4152-2-3-b-856bdfce49" Apr 30 00:27:46.132209 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3158622706.mount: Deactivated successfully. Apr 30 00:27:46.139148 containerd[1498]: time="2025-04-30T00:27:46.139081029Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 00:27:46.140585 containerd[1498]: time="2025-04-30T00:27:46.140549213Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 00:27:46.142086 containerd[1498]: time="2025-04-30T00:27:46.142048322Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312078" Apr 30 00:27:46.142720 containerd[1498]: time="2025-04-30T00:27:46.142686634Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 30 00:27:46.144597 containerd[1498]: time="2025-04-30T00:27:46.144549275Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 00:27:46.145929 containerd[1498]: 
time="2025-04-30T00:27:46.145790354Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 30 00:27:46.145929 containerd[1498]: time="2025-04-30T00:27:46.145879276Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 00:27:46.148651 containerd[1498]: time="2025-04-30T00:27:46.148612250Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 00:27:46.150529 containerd[1498]: time="2025-04-30T00:27:46.150361175Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 488.430726ms" Apr 30 00:27:46.152036 containerd[1498]: time="2025-04-30T00:27:46.151898193Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 478.904519ms" Apr 30 00:27:46.154256 containerd[1498]: time="2025-04-30T00:27:46.154217656Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 486.695701ms" Apr 30 00:27:46.243294 
containerd[1498]: time="2025-04-30T00:27:46.241412121Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:27:46.243294 containerd[1498]: time="2025-04-30T00:27:46.243258282Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:27:46.243801 containerd[1498]: time="2025-04-30T00:27:46.243281194Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:27:46.243801 containerd[1498]: time="2025-04-30T00:27:46.243367601Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:27:46.247676 containerd[1498]: time="2025-04-30T00:27:46.247484196Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:27:46.247676 containerd[1498]: time="2025-04-30T00:27:46.247519059Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:27:46.247676 containerd[1498]: time="2025-04-30T00:27:46.247528607Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:27:46.247676 containerd[1498]: time="2025-04-30T00:27:46.247581253Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:27:46.250771 containerd[1498]: time="2025-04-30T00:27:46.250698617Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:27:46.252917 containerd[1498]: time="2025-04-30T00:27:46.252630725Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:27:46.253313 containerd[1498]: time="2025-04-30T00:27:46.253127609Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:27:46.253599 containerd[1498]: time="2025-04-30T00:27:46.253428057Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:27:46.260157 systemd[1]: Started cri-containerd-09e1f8e889c243ae25beac8bb00674cb2d885f3a52c4f80625755c593af03eda.scope - libcontainer container 09e1f8e889c243ae25beac8bb00674cb2d885f3a52c4f80625755c593af03eda. Apr 30 00:27:46.264376 kubelet[2407]: W0430 00:27:46.263403 2407 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://37.27.9.63:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152-2-3-b-856bdfce49&limit=500&resourceVersion=0": dial tcp 37.27.9.63:6443: connect: connection refused Apr 30 00:27:46.264666 kubelet[2407]: E0430 00:27:46.264649 2407 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://37.27.9.63:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152-2-3-b-856bdfce49&limit=500&resourceVersion=0\": dial tcp 37.27.9.63:6443: connect: connection refused" logger="UnhandledError" Apr 30 00:27:46.277140 systemd[1]: Started cri-containerd-ed60ff404ced9b7c66a382d063e85ba95be4fbd6bdf1fbe6c7105bed2e9e49be.scope - libcontainer container ed60ff404ced9b7c66a382d063e85ba95be4fbd6bdf1fbe6c7105bed2e9e49be. 
Apr 30 00:27:46.282130 systemd[1]: Started cri-containerd-88904657f2b65aef4e70da8b480c7e82f4ad7d78cbe5fec5da4e01c3fc004ef1.scope - libcontainer container 88904657f2b65aef4e70da8b480c7e82f4ad7d78cbe5fec5da4e01c3fc004ef1. Apr 30 00:27:46.322848 containerd[1498]: time="2025-04-30T00:27:46.322776825Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4152-2-3-b-856bdfce49,Uid:2b6c0edf66843b2bbae5344a8079bd14,Namespace:kube-system,Attempt:0,} returns sandbox id \"09e1f8e889c243ae25beac8bb00674cb2d885f3a52c4f80625755c593af03eda\"" Apr 30 00:27:46.325004 containerd[1498]: time="2025-04-30T00:27:46.324882879Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4152-2-3-b-856bdfce49,Uid:ca35dd20a3efd2322fe4f597de98f20a,Namespace:kube-system,Attempt:0,} returns sandbox id \"ed60ff404ced9b7c66a382d063e85ba95be4fbd6bdf1fbe6c7105bed2e9e49be\"" Apr 30 00:27:46.328418 containerd[1498]: time="2025-04-30T00:27:46.328340233Z" level=info msg="CreateContainer within sandbox \"09e1f8e889c243ae25beac8bb00674cb2d885f3a52c4f80625755c593af03eda\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 30 00:27:46.328554 containerd[1498]: time="2025-04-30T00:27:46.328530098Z" level=info msg="CreateContainer within sandbox \"ed60ff404ced9b7c66a382d063e85ba95be4fbd6bdf1fbe6c7105bed2e9e49be\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 30 00:27:46.335539 containerd[1498]: time="2025-04-30T00:27:46.335470854Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4152-2-3-b-856bdfce49,Uid:4db39075849125ceb817e43294f12923,Namespace:kube-system,Attempt:0,} returns sandbox id \"88904657f2b65aef4e70da8b480c7e82f4ad7d78cbe5fec5da4e01c3fc004ef1\"" Apr 30 00:27:46.337876 containerd[1498]: time="2025-04-30T00:27:46.337850877Z" level=info msg="CreateContainer within sandbox \"88904657f2b65aef4e70da8b480c7e82f4ad7d78cbe5fec5da4e01c3fc004ef1\" for container 
&ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 30 00:27:46.345628 containerd[1498]: time="2025-04-30T00:27:46.345528354Z" level=info msg="CreateContainer within sandbox \"09e1f8e889c243ae25beac8bb00674cb2d885f3a52c4f80625755c593af03eda\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"dc7cc55b9568f0d62478d27ba63d3f6f88be5c9a659a908dd02f1a918cd49e91\"" Apr 30 00:27:46.346556 containerd[1498]: time="2025-04-30T00:27:46.346532081Z" level=info msg="StartContainer for \"dc7cc55b9568f0d62478d27ba63d3f6f88be5c9a659a908dd02f1a918cd49e91\"" Apr 30 00:27:46.347470 containerd[1498]: time="2025-04-30T00:27:46.347413516Z" level=info msg="CreateContainer within sandbox \"ed60ff404ced9b7c66a382d063e85ba95be4fbd6bdf1fbe6c7105bed2e9e49be\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"76ab99652735e1c521804ffefc130d0efb273e97d320acf0a065b30c9835e8fd\"" Apr 30 00:27:46.347954 containerd[1498]: time="2025-04-30T00:27:46.347926640Z" level=info msg="StartContainer for \"76ab99652735e1c521804ffefc130d0efb273e97d320acf0a065b30c9835e8fd\"" Apr 30 00:27:46.352899 containerd[1498]: time="2025-04-30T00:27:46.352839333Z" level=info msg="CreateContainer within sandbox \"88904657f2b65aef4e70da8b480c7e82f4ad7d78cbe5fec5da4e01c3fc004ef1\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"7f3e6903d614187c17ab05577157f03d9722da5afcb79b74e04318cb87e6e7e7\"" Apr 30 00:27:46.353259 containerd[1498]: time="2025-04-30T00:27:46.353244902Z" level=info msg="StartContainer for \"7f3e6903d614187c17ab05577157f03d9722da5afcb79b74e04318cb87e6e7e7\"" Apr 30 00:27:46.374905 kubelet[2407]: W0430 00:27:46.374683 2407 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://37.27.9.63:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 37.27.9.63:6443: connect: connection refused Apr 30 00:27:46.374905 kubelet[2407]: 
E0430 00:27:46.374745 2407 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://37.27.9.63:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 37.27.9.63:6443: connect: connection refused" logger="UnhandledError" Apr 30 00:27:46.380150 systemd[1]: Started cri-containerd-dc7cc55b9568f0d62478d27ba63d3f6f88be5c9a659a908dd02f1a918cd49e91.scope - libcontainer container dc7cc55b9568f0d62478d27ba63d3f6f88be5c9a659a908dd02f1a918cd49e91. Apr 30 00:27:46.383636 systemd[1]: Started cri-containerd-76ab99652735e1c521804ffefc130d0efb273e97d320acf0a065b30c9835e8fd.scope - libcontainer container 76ab99652735e1c521804ffefc130d0efb273e97d320acf0a065b30c9835e8fd. Apr 30 00:27:46.392136 systemd[1]: Started cri-containerd-7f3e6903d614187c17ab05577157f03d9722da5afcb79b74e04318cb87e6e7e7.scope - libcontainer container 7f3e6903d614187c17ab05577157f03d9722da5afcb79b74e04318cb87e6e7e7. 
Apr 30 00:27:46.430224 containerd[1498]: time="2025-04-30T00:27:46.430103121Z" level=info msg="StartContainer for \"dc7cc55b9568f0d62478d27ba63d3f6f88be5c9a659a908dd02f1a918cd49e91\" returns successfully" Apr 30 00:27:46.448592 containerd[1498]: time="2025-04-30T00:27:46.448492639Z" level=info msg="StartContainer for \"7f3e6903d614187c17ab05577157f03d9722da5afcb79b74e04318cb87e6e7e7\" returns successfully" Apr 30 00:27:46.456400 containerd[1498]: time="2025-04-30T00:27:46.456368987Z" level=info msg="StartContainer for \"76ab99652735e1c521804ffefc130d0efb273e97d320acf0a065b30c9835e8fd\" returns successfully" Apr 30 00:27:46.823022 kubelet[2407]: I0430 00:27:46.822981 2407 kubelet_node_status.go:72] "Attempting to register node" node="ci-4152-2-3-b-856bdfce49" Apr 30 00:27:47.563152 kubelet[2407]: E0430 00:27:47.561322 2407 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4152-2-3-b-856bdfce49\" not found" node="ci-4152-2-3-b-856bdfce49" Apr 30 00:27:47.760828 kubelet[2407]: I0430 00:27:47.760788 2407 kubelet_node_status.go:75] "Successfully registered node" node="ci-4152-2-3-b-856bdfce49" Apr 30 00:27:47.760959 kubelet[2407]: E0430 00:27:47.760827 2407 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ci-4152-2-3-b-856bdfce49\": node \"ci-4152-2-3-b-856bdfce49\" not found" Apr 30 00:27:47.774851 kubelet[2407]: E0430 00:27:47.774829 2407 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4152-2-3-b-856bdfce49\" not found" Apr 30 00:27:47.875393 kubelet[2407]: E0430 00:27:47.875317 2407 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4152-2-3-b-856bdfce49\" not found" Apr 30 00:27:47.976126 kubelet[2407]: E0430 00:27:47.976071 2407 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4152-2-3-b-856bdfce49\" not found" Apr 30 00:27:48.076714 kubelet[2407]: E0430 
00:27:48.076677 2407 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4152-2-3-b-856bdfce49\" not found" Apr 30 00:27:48.177684 kubelet[2407]: E0430 00:27:48.177501 2407 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4152-2-3-b-856bdfce49\" not found" Apr 30 00:27:48.277665 kubelet[2407]: E0430 00:27:48.277631 2407 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4152-2-3-b-856bdfce49\" not found" Apr 30 00:27:48.378479 kubelet[2407]: E0430 00:27:48.378409 2407 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4152-2-3-b-856bdfce49\" not found" Apr 30 00:27:48.479073 kubelet[2407]: E0430 00:27:48.478929 2407 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4152-2-3-b-856bdfce49\" not found" Apr 30 00:27:48.579497 kubelet[2407]: E0430 00:27:48.579460 2407 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4152-2-3-b-856bdfce49\" not found" Apr 30 00:27:48.681109 kubelet[2407]: E0430 00:27:48.681066 2407 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4152-2-3-b-856bdfce49\" not found" Apr 30 00:27:48.781914 kubelet[2407]: E0430 00:27:48.781779 2407 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4152-2-3-b-856bdfce49\" not found" Apr 30 00:27:48.882642 kubelet[2407]: E0430 00:27:48.882592 2407 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4152-2-3-b-856bdfce49\" not found" Apr 30 00:27:48.983332 kubelet[2407]: E0430 00:27:48.983255 2407 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4152-2-3-b-856bdfce49\" not found" Apr 30 00:27:49.083990 kubelet[2407]: E0430 00:27:49.083929 2407 kubelet_node_status.go:453] "Error getting the current node from lister" err="node 
\"ci-4152-2-3-b-856bdfce49\" not found" Apr 30 00:27:49.184610 kubelet[2407]: E0430 00:27:49.184560 2407 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4152-2-3-b-856bdfce49\" not found" Apr 30 00:27:49.285409 kubelet[2407]: E0430 00:27:49.285358 2407 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4152-2-3-b-856bdfce49\" not found" Apr 30 00:27:49.857058 systemd[1]: Reloading requested from client PID 2678 ('systemctl') (unit session-7.scope)... Apr 30 00:27:49.857082 systemd[1]: Reloading... Apr 30 00:27:49.939088 zram_generator::config[2718]: No configuration found. Apr 30 00:27:50.017789 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 30 00:27:50.084642 systemd[1]: Reloading finished in 227 ms. Apr 30 00:27:50.118827 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 00:27:50.140494 systemd[1]: kubelet.service: Deactivated successfully. Apr 30 00:27:50.140654 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 00:27:50.145200 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 00:27:50.233946 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 00:27:50.237194 (kubelet)[2769]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 30 00:27:50.267873 kubelet[2769]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 30 00:27:50.268346 kubelet[2769]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. 
Image garbage collector will get sandbox image information from CRI. Apr 30 00:27:50.268427 kubelet[2769]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 30 00:27:50.268531 kubelet[2769]: I0430 00:27:50.268509 2769 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 30 00:27:50.274517 kubelet[2769]: I0430 00:27:50.274492 2769 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Apr 30 00:27:50.274602 kubelet[2769]: I0430 00:27:50.274578 2769 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 30 00:27:50.274794 kubelet[2769]: I0430 00:27:50.274784 2769 server.go:929] "Client rotation is on, will bootstrap in background" Apr 30 00:27:50.275889 kubelet[2769]: I0430 00:27:50.275876 2769 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Apr 30 00:27:50.277414 kubelet[2769]: I0430 00:27:50.277401 2769 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 30 00:27:50.282839 kubelet[2769]: E0430 00:27:50.282820 2769 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 30 00:27:50.283518 kubelet[2769]: I0430 00:27:50.283444 2769 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Apr 30 00:27:50.286791 kubelet[2769]: I0430 00:27:50.286731 2769 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Apr 30 00:27:50.286983 kubelet[2769]: I0430 00:27:50.286925 2769 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Apr 30 00:27:50.287196 kubelet[2769]: I0430 00:27:50.287141 2769 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 30 00:27:50.287443 kubelet[2769]: I0430 00:27:50.287164 2769 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4152-2-3-b-856bdfce49","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","Topolog
yManagerPolicyOptions":null,"CgroupVersion":2} Apr 30 00:27:50.287443 kubelet[2769]: I0430 00:27:50.287395 2769 topology_manager.go:138] "Creating topology manager with none policy" Apr 30 00:27:50.287443 kubelet[2769]: I0430 00:27:50.287402 2769 container_manager_linux.go:300] "Creating device plugin manager" Apr 30 00:27:50.287671 kubelet[2769]: I0430 00:27:50.287528 2769 state_mem.go:36] "Initialized new in-memory state store" Apr 30 00:27:50.287795 kubelet[2769]: I0430 00:27:50.287723 2769 kubelet.go:408] "Attempting to sync node with API server" Apr 30 00:27:50.287795 kubelet[2769]: I0430 00:27:50.287734 2769 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 30 00:27:50.287795 kubelet[2769]: I0430 00:27:50.287752 2769 kubelet.go:314] "Adding apiserver pod source" Apr 30 00:27:50.288026 kubelet[2769]: I0430 00:27:50.287999 2769 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 30 00:27:50.299055 kubelet[2769]: I0430 00:27:50.298857 2769 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Apr 30 00:27:50.299288 kubelet[2769]: I0430 00:27:50.299276 2769 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Apr 30 00:27:50.299706 kubelet[2769]: I0430 00:27:50.299694 2769 server.go:1269] "Started kubelet" Apr 30 00:27:50.299942 kubelet[2769]: I0430 00:27:50.299924 2769 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Apr 30 00:27:50.300596 kubelet[2769]: I0430 00:27:50.300576 2769 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 30 00:27:50.301367 kubelet[2769]: I0430 00:27:50.301181 2769 server.go:460] "Adding debug handlers to kubelet server" Apr 30 00:27:50.302532 kubelet[2769]: I0430 00:27:50.302378 2769 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 30 00:27:50.302855 kubelet[2769]: I0430 00:27:50.302774 2769 
server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 30 00:27:50.306745 kubelet[2769]: I0430 00:27:50.306260 2769 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 30 00:27:50.308496 kubelet[2769]: I0430 00:27:50.307708 2769 volume_manager.go:289] "Starting Kubelet Volume Manager" Apr 30 00:27:50.308496 kubelet[2769]: I0430 00:27:50.307768 2769 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Apr 30 00:27:50.308496 kubelet[2769]: I0430 00:27:50.307850 2769 reconciler.go:26] "Reconciler: start to sync state" Apr 30 00:27:50.308718 kubelet[2769]: I0430 00:27:50.308706 2769 factory.go:221] Registration of the systemd container factory successfully Apr 30 00:27:50.308836 kubelet[2769]: I0430 00:27:50.308822 2769 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 30 00:27:50.312453 kubelet[2769]: I0430 00:27:50.312430 2769 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Apr 30 00:27:50.313005 kubelet[2769]: E0430 00:27:50.312975 2769 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 30 00:27:50.313625 kubelet[2769]: I0430 00:27:50.313613 2769 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Apr 30 00:27:50.313704 kubelet[2769]: I0430 00:27:50.313697 2769 status_manager.go:217] "Starting to sync pod status with apiserver" Apr 30 00:27:50.313840 kubelet[2769]: I0430 00:27:50.313764 2769 kubelet.go:2321] "Starting kubelet main sync loop" Apr 30 00:27:50.313840 kubelet[2769]: E0430 00:27:50.313794 2769 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 30 00:27:50.314241 kubelet[2769]: I0430 00:27:50.314220 2769 factory.go:221] Registration of the containerd container factory successfully Apr 30 00:27:50.358065 kubelet[2769]: I0430 00:27:50.358003 2769 cpu_manager.go:214] "Starting CPU manager" policy="none" Apr 30 00:27:50.358065 kubelet[2769]: I0430 00:27:50.358046 2769 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Apr 30 00:27:50.358065 kubelet[2769]: I0430 00:27:50.358060 2769 state_mem.go:36] "Initialized new in-memory state store" Apr 30 00:27:50.358217 kubelet[2769]: I0430 00:27:50.358159 2769 state_mem.go:88] "Updated default CPUSet" cpuSet="" Apr 30 00:27:50.358217 kubelet[2769]: I0430 00:27:50.358167 2769 state_mem.go:96] "Updated CPUSet assignments" assignments={} Apr 30 00:27:50.358217 kubelet[2769]: I0430 00:27:50.358181 2769 policy_none.go:49] "None policy: Start" Apr 30 00:27:50.358669 kubelet[2769]: I0430 00:27:50.358637 2769 memory_manager.go:170] "Starting memorymanager" policy="None" Apr 30 00:27:50.358749 kubelet[2769]: I0430 00:27:50.358735 2769 state_mem.go:35] "Initializing new in-memory state store" Apr 30 00:27:50.359031 kubelet[2769]: I0430 00:27:50.358969 2769 state_mem.go:75] "Updated machine memory state" Apr 30 00:27:50.363056 kubelet[2769]: I0430 00:27:50.362777 2769 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Apr 30 00:27:50.363056 kubelet[2769]: I0430 00:27:50.362888 2769 eviction_manager.go:189] 
"Eviction manager: starting control loop" Apr 30 00:27:50.363056 kubelet[2769]: I0430 00:27:50.362896 2769 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 30 00:27:50.363416 kubelet[2769]: I0430 00:27:50.363267 2769 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 30 00:27:50.470984 kubelet[2769]: I0430 00:27:50.470856 2769 kubelet_node_status.go:72] "Attempting to register node" node="ci-4152-2-3-b-856bdfce49" Apr 30 00:27:50.477986 kubelet[2769]: I0430 00:27:50.477953 2769 kubelet_node_status.go:111] "Node was previously registered" node="ci-4152-2-3-b-856bdfce49" Apr 30 00:27:50.478084 kubelet[2769]: I0430 00:27:50.478031 2769 kubelet_node_status.go:75] "Successfully registered node" node="ci-4152-2-3-b-856bdfce49" Apr 30 00:27:50.609523 kubelet[2769]: I0430 00:27:50.609486 2769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2b6c0edf66843b2bbae5344a8079bd14-kubeconfig\") pod \"kube-controller-manager-ci-4152-2-3-b-856bdfce49\" (UID: \"2b6c0edf66843b2bbae5344a8079bd14\") " pod="kube-system/kube-controller-manager-ci-4152-2-3-b-856bdfce49" Apr 30 00:27:50.609523 kubelet[2769]: I0430 00:27:50.609524 2769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2b6c0edf66843b2bbae5344a8079bd14-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4152-2-3-b-856bdfce49\" (UID: \"2b6c0edf66843b2bbae5344a8079bd14\") " pod="kube-system/kube-controller-manager-ci-4152-2-3-b-856bdfce49" Apr 30 00:27:50.609523 kubelet[2769]: I0430 00:27:50.609573 2769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ca35dd20a3efd2322fe4f597de98f20a-kubeconfig\") pod 
\"kube-scheduler-ci-4152-2-3-b-856bdfce49\" (UID: \"ca35dd20a3efd2322fe4f597de98f20a\") " pod="kube-system/kube-scheduler-ci-4152-2-3-b-856bdfce49" Apr 30 00:27:50.609788 kubelet[2769]: I0430 00:27:50.609590 2769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4db39075849125ceb817e43294f12923-ca-certs\") pod \"kube-apiserver-ci-4152-2-3-b-856bdfce49\" (UID: \"4db39075849125ceb817e43294f12923\") " pod="kube-system/kube-apiserver-ci-4152-2-3-b-856bdfce49" Apr 30 00:27:50.609788 kubelet[2769]: I0430 00:27:50.609605 2769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4db39075849125ceb817e43294f12923-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4152-2-3-b-856bdfce49\" (UID: \"4db39075849125ceb817e43294f12923\") " pod="kube-system/kube-apiserver-ci-4152-2-3-b-856bdfce49" Apr 30 00:27:50.609788 kubelet[2769]: I0430 00:27:50.609623 2769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/2b6c0edf66843b2bbae5344a8079bd14-flexvolume-dir\") pod \"kube-controller-manager-ci-4152-2-3-b-856bdfce49\" (UID: \"2b6c0edf66843b2bbae5344a8079bd14\") " pod="kube-system/kube-controller-manager-ci-4152-2-3-b-856bdfce49" Apr 30 00:27:50.609788 kubelet[2769]: I0430 00:27:50.609638 2769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2b6c0edf66843b2bbae5344a8079bd14-k8s-certs\") pod \"kube-controller-manager-ci-4152-2-3-b-856bdfce49\" (UID: \"2b6c0edf66843b2bbae5344a8079bd14\") " pod="kube-system/kube-controller-manager-ci-4152-2-3-b-856bdfce49" Apr 30 00:27:50.609788 kubelet[2769]: I0430 00:27:50.609652 2769 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2b6c0edf66843b2bbae5344a8079bd14-ca-certs\") pod \"kube-controller-manager-ci-4152-2-3-b-856bdfce49\" (UID: \"2b6c0edf66843b2bbae5344a8079bd14\") " pod="kube-system/kube-controller-manager-ci-4152-2-3-b-856bdfce49" Apr 30 00:27:50.609946 kubelet[2769]: I0430 00:27:50.609671 2769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4db39075849125ceb817e43294f12923-k8s-certs\") pod \"kube-apiserver-ci-4152-2-3-b-856bdfce49\" (UID: \"4db39075849125ceb817e43294f12923\") " pod="kube-system/kube-apiserver-ci-4152-2-3-b-856bdfce49" Apr 30 00:27:50.867825 sudo[2799]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Apr 30 00:27:50.868107 sudo[2799]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Apr 30 00:27:51.294909 kubelet[2769]: I0430 00:27:51.294418 2769 apiserver.go:52] "Watching apiserver" Apr 30 00:27:51.308175 kubelet[2769]: I0430 00:27:51.308153 2769 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Apr 30 00:27:51.349352 sudo[2799]: pam_unix(sudo:session): session closed for user root Apr 30 00:27:51.354856 kubelet[2769]: E0430 00:27:51.354261 2769 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4152-2-3-b-856bdfce49\" already exists" pod="kube-system/kube-apiserver-ci-4152-2-3-b-856bdfce49" Apr 30 00:27:51.374405 kubelet[2769]: I0430 00:27:51.374349 2769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4152-2-3-b-856bdfce49" podStartSLOduration=1.374333029 podStartE2EDuration="1.374333029s" podCreationTimestamp="2025-04-30 00:27:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-04-30 00:27:51.366130533 +0000 UTC m=+1.126188197" watchObservedRunningTime="2025-04-30 00:27:51.374333029 +0000 UTC m=+1.134390693" Apr 30 00:27:51.384070 kubelet[2769]: I0430 00:27:51.384032 2769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4152-2-3-b-856bdfce49" podStartSLOduration=1.384018091 podStartE2EDuration="1.384018091s" podCreationTimestamp="2025-04-30 00:27:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 00:27:51.374644462 +0000 UTC m=+1.134702126" watchObservedRunningTime="2025-04-30 00:27:51.384018091 +0000 UTC m=+1.144075755" Apr 30 00:27:51.398439 kubelet[2769]: I0430 00:27:51.398332 2769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4152-2-3-b-856bdfce49" podStartSLOduration=1.398318231 podStartE2EDuration="1.398318231s" podCreationTimestamp="2025-04-30 00:27:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 00:27:51.391177328 +0000 UTC m=+1.151234992" watchObservedRunningTime="2025-04-30 00:27:51.398318231 +0000 UTC m=+1.158375895" Apr 30 00:27:52.513089 sudo[1877]: pam_unix(sudo:session): session closed for user root Apr 30 00:27:52.670227 sshd[1876]: Connection closed by 139.178.89.65 port 42810 Apr 30 00:27:52.671400 sshd-session[1874]: pam_unix(sshd:session): session closed for user core Apr 30 00:27:52.673912 systemd[1]: sshd@6-37.27.9.63:22-139.178.89.65:42810.service: Deactivated successfully. Apr 30 00:27:52.676305 systemd[1]: session-7.scope: Deactivated successfully. Apr 30 00:27:52.676473 systemd[1]: session-7.scope: Consumed 3.271s CPU time, 140.4M memory peak, 0B memory swap peak. Apr 30 00:27:52.677769 systemd-logind[1474]: Session 7 logged out. Waiting for processes to exit. 
Apr 30 00:27:52.679280 systemd-logind[1474]: Removed session 7. Apr 30 00:27:56.353769 kubelet[2769]: I0430 00:27:56.353721 2769 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 30 00:27:56.354203 containerd[1498]: time="2025-04-30T00:27:56.354054270Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Apr 30 00:27:56.354525 kubelet[2769]: I0430 00:27:56.354230 2769 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 30 00:27:57.412001 systemd[1]: Created slice kubepods-besteffort-pod78a1450b_23c0_4200_ba16_871995bd7331.slice - libcontainer container kubepods-besteffort-pod78a1450b_23c0_4200_ba16_871995bd7331.slice. Apr 30 00:27:57.422583 systemd[1]: Created slice kubepods-burstable-pod0b88508a_f8bf_4158_a4a8_cd11af4aef0f.slice - libcontainer container kubepods-burstable-pod0b88508a_f8bf_4158_a4a8_cd11af4aef0f.slice. Apr 30 00:27:57.456432 kubelet[2769]: I0430 00:27:57.456284 2769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/78a1450b-23c0-4200-ba16-871995bd7331-lib-modules\") pod \"kube-proxy-pcm54\" (UID: \"78a1450b-23c0-4200-ba16-871995bd7331\") " pod="kube-system/kube-proxy-pcm54" Apr 30 00:27:57.456432 kubelet[2769]: I0430 00:27:57.456315 2769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0b88508a-f8bf-4158-a4a8-cd11af4aef0f-hostproc\") pod \"cilium-pc4tc\" (UID: \"0b88508a-f8bf-4158-a4a8-cd11af4aef0f\") " pod="kube-system/cilium-pc4tc" Apr 30 00:27:57.456432 kubelet[2769]: I0430 00:27:57.456330 2769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/0b88508a-f8bf-4158-a4a8-cd11af4aef0f-cilium-config-path\") pod \"cilium-pc4tc\" (UID: \"0b88508a-f8bf-4158-a4a8-cd11af4aef0f\") " pod="kube-system/cilium-pc4tc" Apr 30 00:27:57.456432 kubelet[2769]: I0430 00:27:57.456342 2769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0b88508a-f8bf-4158-a4a8-cd11af4aef0f-hubble-tls\") pod \"cilium-pc4tc\" (UID: \"0b88508a-f8bf-4158-a4a8-cd11af4aef0f\") " pod="kube-system/cilium-pc4tc" Apr 30 00:27:57.456432 kubelet[2769]: I0430 00:27:57.456353 2769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4gxtm\" (UniqueName: \"kubernetes.io/projected/0b88508a-f8bf-4158-a4a8-cd11af4aef0f-kube-api-access-4gxtm\") pod \"cilium-pc4tc\" (UID: \"0b88508a-f8bf-4158-a4a8-cd11af4aef0f\") " pod="kube-system/cilium-pc4tc" Apr 30 00:27:57.456432 kubelet[2769]: I0430 00:27:57.456365 2769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/78a1450b-23c0-4200-ba16-871995bd7331-xtables-lock\") pod \"kube-proxy-pcm54\" (UID: \"78a1450b-23c0-4200-ba16-871995bd7331\") " pod="kube-system/kube-proxy-pcm54" Apr 30 00:27:57.456854 kubelet[2769]: I0430 00:27:57.456376 2769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0b88508a-f8bf-4158-a4a8-cd11af4aef0f-xtables-lock\") pod \"cilium-pc4tc\" (UID: \"0b88508a-f8bf-4158-a4a8-cd11af4aef0f\") " pod="kube-system/cilium-pc4tc" Apr 30 00:27:57.456854 kubelet[2769]: I0430 00:27:57.456389 2769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0b88508a-f8bf-4158-a4a8-cd11af4aef0f-host-proc-sys-net\") pod \"cilium-pc4tc\" (UID: 
\"0b88508a-f8bf-4158-a4a8-cd11af4aef0f\") " pod="kube-system/cilium-pc4tc" Apr 30 00:27:57.456854 kubelet[2769]: I0430 00:27:57.456401 2769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0b88508a-f8bf-4158-a4a8-cd11af4aef0f-lib-modules\") pod \"cilium-pc4tc\" (UID: \"0b88508a-f8bf-4158-a4a8-cd11af4aef0f\") " pod="kube-system/cilium-pc4tc" Apr 30 00:27:57.457334 kubelet[2769]: I0430 00:27:57.457060 2769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0b88508a-f8bf-4158-a4a8-cd11af4aef0f-cilium-cgroup\") pod \"cilium-pc4tc\" (UID: \"0b88508a-f8bf-4158-a4a8-cd11af4aef0f\") " pod="kube-system/cilium-pc4tc" Apr 30 00:27:57.457334 kubelet[2769]: I0430 00:27:57.457115 2769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0b88508a-f8bf-4158-a4a8-cd11af4aef0f-cni-path\") pod \"cilium-pc4tc\" (UID: \"0b88508a-f8bf-4158-a4a8-cd11af4aef0f\") " pod="kube-system/cilium-pc4tc" Apr 30 00:27:57.457334 kubelet[2769]: I0430 00:27:57.457140 2769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0b88508a-f8bf-4158-a4a8-cd11af4aef0f-etc-cni-netd\") pod \"cilium-pc4tc\" (UID: \"0b88508a-f8bf-4158-a4a8-cd11af4aef0f\") " pod="kube-system/cilium-pc4tc" Apr 30 00:27:57.457334 kubelet[2769]: I0430 00:27:57.457197 2769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0b88508a-f8bf-4158-a4a8-cd11af4aef0f-host-proc-sys-kernel\") pod \"cilium-pc4tc\" (UID: \"0b88508a-f8bf-4158-a4a8-cd11af4aef0f\") " pod="kube-system/cilium-pc4tc" Apr 30 00:27:57.457334 kubelet[2769]: I0430 00:27:57.457217 
2769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0b88508a-f8bf-4158-a4a8-cd11af4aef0f-cilium-run\") pod \"cilium-pc4tc\" (UID: \"0b88508a-f8bf-4158-a4a8-cd11af4aef0f\") " pod="kube-system/cilium-pc4tc" Apr 30 00:27:57.457334 kubelet[2769]: I0430 00:27:57.457229 2769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0b88508a-f8bf-4158-a4a8-cd11af4aef0f-clustermesh-secrets\") pod \"cilium-pc4tc\" (UID: \"0b88508a-f8bf-4158-a4a8-cd11af4aef0f\") " pod="kube-system/cilium-pc4tc" Apr 30 00:27:57.457592 kubelet[2769]: I0430 00:27:57.457239 2769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/78a1450b-23c0-4200-ba16-871995bd7331-kube-proxy\") pod \"kube-proxy-pcm54\" (UID: \"78a1450b-23c0-4200-ba16-871995bd7331\") " pod="kube-system/kube-proxy-pcm54" Apr 30 00:27:57.457592 kubelet[2769]: I0430 00:27:57.457249 2769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0b88508a-f8bf-4158-a4a8-cd11af4aef0f-bpf-maps\") pod \"cilium-pc4tc\" (UID: \"0b88508a-f8bf-4158-a4a8-cd11af4aef0f\") " pod="kube-system/cilium-pc4tc" Apr 30 00:27:57.457592 kubelet[2769]: I0430 00:27:57.457272 2769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48hlb\" (UniqueName: \"kubernetes.io/projected/78a1450b-23c0-4200-ba16-871995bd7331-kube-api-access-48hlb\") pod \"kube-proxy-pcm54\" (UID: \"78a1450b-23c0-4200-ba16-871995bd7331\") " pod="kube-system/kube-proxy-pcm54" Apr 30 00:27:57.488216 systemd[1]: Created slice kubepods-besteffort-pod2036a4e1_a065_4a97_b00d_90f227ad1c4b.slice - libcontainer container 
kubepods-besteffort-pod2036a4e1_a065_4a97_b00d_90f227ad1c4b.slice. Apr 30 00:27:57.558300 kubelet[2769]: I0430 00:27:57.558196 2769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2036a4e1-a065-4a97-b00d-90f227ad1c4b-cilium-config-path\") pod \"cilium-operator-5d85765b45-b7dtb\" (UID: \"2036a4e1-a065-4a97-b00d-90f227ad1c4b\") " pod="kube-system/cilium-operator-5d85765b45-b7dtb" Apr 30 00:27:57.558300 kubelet[2769]: I0430 00:27:57.558281 2769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gfqxl\" (UniqueName: \"kubernetes.io/projected/2036a4e1-a065-4a97-b00d-90f227ad1c4b-kube-api-access-gfqxl\") pod \"cilium-operator-5d85765b45-b7dtb\" (UID: \"2036a4e1-a065-4a97-b00d-90f227ad1c4b\") " pod="kube-system/cilium-operator-5d85765b45-b7dtb" Apr 30 00:27:57.720153 containerd[1498]: time="2025-04-30T00:27:57.719918872Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pcm54,Uid:78a1450b-23c0-4200-ba16-871995bd7331,Namespace:kube-system,Attempt:0,}" Apr 30 00:27:57.726126 containerd[1498]: time="2025-04-30T00:27:57.725822489Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-pc4tc,Uid:0b88508a-f8bf-4158-a4a8-cd11af4aef0f,Namespace:kube-system,Attempt:0,}" Apr 30 00:27:57.764070 containerd[1498]: time="2025-04-30T00:27:57.763157426Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:27:57.764070 containerd[1498]: time="2025-04-30T00:27:57.763247383Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:27:57.764070 containerd[1498]: time="2025-04-30T00:27:57.763262963Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:27:57.764070 containerd[1498]: time="2025-04-30T00:27:57.763368668Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:27:57.769102 containerd[1498]: time="2025-04-30T00:27:57.768465198Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:27:57.769102 containerd[1498]: time="2025-04-30T00:27:57.768523677Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:27:57.769102 containerd[1498]: time="2025-04-30T00:27:57.768542902Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:27:57.769102 containerd[1498]: time="2025-04-30T00:27:57.768617962Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:27:57.784413 systemd[1]: Started cri-containerd-6b15f794d4320866e204c3fa53c98a14310d46b3eeee98a35f0740370e679169.scope - libcontainer container 6b15f794d4320866e204c3fa53c98a14310d46b3eeee98a35f0740370e679169. Apr 30 00:27:57.790439 systemd[1]: Started cri-containerd-5ef57b48e6d15a5060d36d2e6b2ca8d374cf1f99fa07bfac4203473b1e44b1a1.scope - libcontainer container 5ef57b48e6d15a5060d36d2e6b2ca8d374cf1f99fa07bfac4203473b1e44b1a1. 
Apr 30 00:27:57.791580 containerd[1498]: time="2025-04-30T00:27:57.791553184Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-b7dtb,Uid:2036a4e1-a065-4a97-b00d-90f227ad1c4b,Namespace:kube-system,Attempt:0,}" Apr 30 00:27:57.821864 containerd[1498]: time="2025-04-30T00:27:57.821787839Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pcm54,Uid:78a1450b-23c0-4200-ba16-871995bd7331,Namespace:kube-system,Attempt:0,} returns sandbox id \"6b15f794d4320866e204c3fa53c98a14310d46b3eeee98a35f0740370e679169\"" Apr 30 00:27:57.826353 containerd[1498]: time="2025-04-30T00:27:57.826288591Z" level=info msg="CreateContainer within sandbox \"6b15f794d4320866e204c3fa53c98a14310d46b3eeee98a35f0740370e679169\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 30 00:27:57.833847 containerd[1498]: time="2025-04-30T00:27:57.833815015Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-pc4tc,Uid:0b88508a-f8bf-4158-a4a8-cd11af4aef0f,Namespace:kube-system,Attempt:0,} returns sandbox id \"5ef57b48e6d15a5060d36d2e6b2ca8d374cf1f99fa07bfac4203473b1e44b1a1\"" Apr 30 00:27:57.835423 containerd[1498]: time="2025-04-30T00:27:57.835245793Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Apr 30 00:27:57.839796 containerd[1498]: time="2025-04-30T00:27:57.839483417Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:27:57.839796 containerd[1498]: time="2025-04-30T00:27:57.839551193Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:27:57.839796 containerd[1498]: time="2025-04-30T00:27:57.839568244Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:27:57.839796 containerd[1498]: time="2025-04-30T00:27:57.839673750Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:27:57.844542 containerd[1498]: time="2025-04-30T00:27:57.844520204Z" level=info msg="CreateContainer within sandbox \"6b15f794d4320866e204c3fa53c98a14310d46b3eeee98a35f0740370e679169\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"265ed3bfddf80743abe26953d578a1ab56aa974ffd2394f62db79acb25a7ae37\"" Apr 30 00:27:57.845273 containerd[1498]: time="2025-04-30T00:27:57.845258155Z" level=info msg="StartContainer for \"265ed3bfddf80743abe26953d578a1ab56aa974ffd2394f62db79acb25a7ae37\"" Apr 30 00:27:57.856518 systemd[1]: Started cri-containerd-93f047671f000fa91503ca5b61f083f48865c68e4a36228aeb16f35d1eed6c0e.scope - libcontainer container 93f047671f000fa91503ca5b61f083f48865c68e4a36228aeb16f35d1eed6c0e. Apr 30 00:27:57.873123 systemd[1]: Started cri-containerd-265ed3bfddf80743abe26953d578a1ab56aa974ffd2394f62db79acb25a7ae37.scope - libcontainer container 265ed3bfddf80743abe26953d578a1ab56aa974ffd2394f62db79acb25a7ae37. 
Apr 30 00:27:57.901030 containerd[1498]: time="2025-04-30T00:27:57.900990544Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-b7dtb,Uid:2036a4e1-a065-4a97-b00d-90f227ad1c4b,Namespace:kube-system,Attempt:0,} returns sandbox id \"93f047671f000fa91503ca5b61f083f48865c68e4a36228aeb16f35d1eed6c0e\"" Apr 30 00:27:57.906757 containerd[1498]: time="2025-04-30T00:27:57.906725079Z" level=info msg="StartContainer for \"265ed3bfddf80743abe26953d578a1ab56aa974ffd2394f62db79acb25a7ae37\" returns successfully" Apr 30 00:28:00.343277 kubelet[2769]: I0430 00:28:00.342980 2769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-pcm54" podStartSLOduration=3.342964844 podStartE2EDuration="3.342964844s" podCreationTimestamp="2025-04-30 00:27:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 00:27:58.370065408 +0000 UTC m=+8.130123101" watchObservedRunningTime="2025-04-30 00:28:00.342964844 +0000 UTC m=+10.103022509" Apr 30 00:28:01.111494 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1294549293.mount: Deactivated successfully. 
Apr 30 00:28:02.431302 containerd[1498]: time="2025-04-30T00:28:02.431219010Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:28:02.432711 containerd[1498]: time="2025-04-30T00:28:02.432660737Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Apr 30 00:28:02.433448 containerd[1498]: time="2025-04-30T00:28:02.432903571Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:28:02.434492 containerd[1498]: time="2025-04-30T00:28:02.434200037Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 4.598930421s" Apr 30 00:28:02.434492 containerd[1498]: time="2025-04-30T00:28:02.434226988Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Apr 30 00:28:02.435384 containerd[1498]: time="2025-04-30T00:28:02.435352584Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Apr 30 00:28:02.437523 containerd[1498]: time="2025-04-30T00:28:02.437447524Z" level=info msg="CreateContainer within sandbox \"5ef57b48e6d15a5060d36d2e6b2ca8d374cf1f99fa07bfac4203473b1e44b1a1\" for container 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Apr 30 00:28:02.477366 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount447569318.mount: Deactivated successfully. Apr 30 00:28:02.488499 containerd[1498]: time="2025-04-30T00:28:02.488462128Z" level=info msg="CreateContainer within sandbox \"5ef57b48e6d15a5060d36d2e6b2ca8d374cf1f99fa07bfac4203473b1e44b1a1\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"4716543a28e92ed941e3e41c94a0e03cacf02d46c6f8e0ccc898208e770f22fe\"" Apr 30 00:28:02.489315 containerd[1498]: time="2025-04-30T00:28:02.489092827Z" level=info msg="StartContainer for \"4716543a28e92ed941e3e41c94a0e03cacf02d46c6f8e0ccc898208e770f22fe\"" Apr 30 00:28:02.550212 systemd[1]: run-containerd-runc-k8s.io-4716543a28e92ed941e3e41c94a0e03cacf02d46c6f8e0ccc898208e770f22fe-runc.gYi8t9.mount: Deactivated successfully. Apr 30 00:28:02.558126 systemd[1]: Started cri-containerd-4716543a28e92ed941e3e41c94a0e03cacf02d46c6f8e0ccc898208e770f22fe.scope - libcontainer container 4716543a28e92ed941e3e41c94a0e03cacf02d46c6f8e0ccc898208e770f22fe. Apr 30 00:28:02.577311 containerd[1498]: time="2025-04-30T00:28:02.577211488Z" level=info msg="StartContainer for \"4716543a28e92ed941e3e41c94a0e03cacf02d46c6f8e0ccc898208e770f22fe\" returns successfully" Apr 30 00:28:02.581990 systemd[1]: cri-containerd-4716543a28e92ed941e3e41c94a0e03cacf02d46c6f8e0ccc898208e770f22fe.scope: Deactivated successfully. 
Apr 30 00:28:02.686718 containerd[1498]: time="2025-04-30T00:28:02.654551539Z" level=info msg="shim disconnected" id=4716543a28e92ed941e3e41c94a0e03cacf02d46c6f8e0ccc898208e770f22fe namespace=k8s.io Apr 30 00:28:02.686718 containerd[1498]: time="2025-04-30T00:28:02.686649318Z" level=warning msg="cleaning up after shim disconnected" id=4716543a28e92ed941e3e41c94a0e03cacf02d46c6f8e0ccc898208e770f22fe namespace=k8s.io Apr 30 00:28:02.686718 containerd[1498]: time="2025-04-30T00:28:02.686661972Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 00:28:03.387278 containerd[1498]: time="2025-04-30T00:28:03.387215010Z" level=info msg="CreateContainer within sandbox \"5ef57b48e6d15a5060d36d2e6b2ca8d374cf1f99fa07bfac4203473b1e44b1a1\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Apr 30 00:28:03.406836 containerd[1498]: time="2025-04-30T00:28:03.406767462Z" level=info msg="CreateContainer within sandbox \"5ef57b48e6d15a5060d36d2e6b2ca8d374cf1f99fa07bfac4203473b1e44b1a1\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"2e1fd780197f9eb47b9b1aef932c3275cd3f2916c531c1bae6b0a08322242def\"" Apr 30 00:28:03.408585 containerd[1498]: time="2025-04-30T00:28:03.407697734Z" level=info msg="StartContainer for \"2e1fd780197f9eb47b9b1aef932c3275cd3f2916c531c1bae6b0a08322242def\"" Apr 30 00:28:03.442188 systemd[1]: Started cri-containerd-2e1fd780197f9eb47b9b1aef932c3275cd3f2916c531c1bae6b0a08322242def.scope - libcontainer container 2e1fd780197f9eb47b9b1aef932c3275cd3f2916c531c1bae6b0a08322242def. Apr 30 00:28:03.467266 containerd[1498]: time="2025-04-30T00:28:03.467174915Z" level=info msg="StartContainer for \"2e1fd780197f9eb47b9b1aef932c3275cd3f2916c531c1bae6b0a08322242def\" returns successfully" Apr 30 00:28:03.480339 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4716543a28e92ed941e3e41c94a0e03cacf02d46c6f8e0ccc898208e770f22fe-rootfs.mount: Deactivated successfully. 
Apr 30 00:28:03.489041 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 30 00:28:03.489386 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Apr 30 00:28:03.489477 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Apr 30 00:28:03.497310 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 30 00:28:03.497948 systemd[1]: cri-containerd-2e1fd780197f9eb47b9b1aef932c3275cd3f2916c531c1bae6b0a08322242def.scope: Deactivated successfully. Apr 30 00:28:03.519303 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2e1fd780197f9eb47b9b1aef932c3275cd3f2916c531c1bae6b0a08322242def-rootfs.mount: Deactivated successfully. Apr 30 00:28:03.525296 containerd[1498]: time="2025-04-30T00:28:03.525236715Z" level=info msg="shim disconnected" id=2e1fd780197f9eb47b9b1aef932c3275cd3f2916c531c1bae6b0a08322242def namespace=k8s.io Apr 30 00:28:03.525296 containerd[1498]: time="2025-04-30T00:28:03.525294904Z" level=warning msg="cleaning up after shim disconnected" id=2e1fd780197f9eb47b9b1aef932c3275cd3f2916c531c1bae6b0a08322242def namespace=k8s.io Apr 30 00:28:03.525634 containerd[1498]: time="2025-04-30T00:28:03.525306184Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 00:28:03.530752 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 30 00:28:04.388898 containerd[1498]: time="2025-04-30T00:28:04.388810192Z" level=info msg="CreateContainer within sandbox \"5ef57b48e6d15a5060d36d2e6b2ca8d374cf1f99fa07bfac4203473b1e44b1a1\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Apr 30 00:28:04.473762 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1513872134.mount: Deactivated successfully. 
Apr 30 00:28:04.477336 containerd[1498]: time="2025-04-30T00:28:04.477278603Z" level=info msg="CreateContainer within sandbox \"5ef57b48e6d15a5060d36d2e6b2ca8d374cf1f99fa07bfac4203473b1e44b1a1\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"d7249f3ec7df1a2c98b8c6aec85204d533880db3e513880ffa832c4c4a665485\"" Apr 30 00:28:04.477920 containerd[1498]: time="2025-04-30T00:28:04.477872588Z" level=info msg="StartContainer for \"d7249f3ec7df1a2c98b8c6aec85204d533880db3e513880ffa832c4c4a665485\"" Apr 30 00:28:04.505192 systemd[1]: Started cri-containerd-d7249f3ec7df1a2c98b8c6aec85204d533880db3e513880ffa832c4c4a665485.scope - libcontainer container d7249f3ec7df1a2c98b8c6aec85204d533880db3e513880ffa832c4c4a665485. Apr 30 00:28:04.533881 containerd[1498]: time="2025-04-30T00:28:04.533835574Z" level=info msg="StartContainer for \"d7249f3ec7df1a2c98b8c6aec85204d533880db3e513880ffa832c4c4a665485\" returns successfully" Apr 30 00:28:04.534526 systemd[1]: cri-containerd-d7249f3ec7df1a2c98b8c6aec85204d533880db3e513880ffa832c4c4a665485.scope: Deactivated successfully. Apr 30 00:28:04.554579 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d7249f3ec7df1a2c98b8c6aec85204d533880db3e513880ffa832c4c4a665485-rootfs.mount: Deactivated successfully. Apr 30 00:28:04.560800 containerd[1498]: time="2025-04-30T00:28:04.560726065Z" level=info msg="shim disconnected" id=d7249f3ec7df1a2c98b8c6aec85204d533880db3e513880ffa832c4c4a665485 namespace=k8s.io Apr 30 00:28:04.560800 containerd[1498]: time="2025-04-30T00:28:04.560774738Z" level=warning msg="cleaning up after shim disconnected" id=d7249f3ec7df1a2c98b8c6aec85204d533880db3e513880ffa832c4c4a665485 namespace=k8s.io Apr 30 00:28:04.560800 containerd[1498]: time="2025-04-30T00:28:04.560782001Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 00:28:04.741651 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1272789845.mount: Deactivated successfully. 
Apr 30 00:28:05.202927 containerd[1498]: time="2025-04-30T00:28:05.202880083Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:28:05.203638 containerd[1498]: time="2025-04-30T00:28:05.203593565Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Apr 30 00:28:05.204670 containerd[1498]: time="2025-04-30T00:28:05.204628869Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:28:05.205869 containerd[1498]: time="2025-04-30T00:28:05.205825665Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.770440821s" Apr 30 00:28:05.205919 containerd[1498]: time="2025-04-30T00:28:05.205872934Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Apr 30 00:28:05.212845 containerd[1498]: time="2025-04-30T00:28:05.212726576Z" level=info msg="CreateContainer within sandbox \"93f047671f000fa91503ca5b61f083f48865c68e4a36228aeb16f35d1eed6c0e\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Apr 30 00:28:05.230430 containerd[1498]: time="2025-04-30T00:28:05.230382155Z" level=info msg="CreateContainer within sandbox 
\"93f047671f000fa91503ca5b61f083f48865c68e4a36228aeb16f35d1eed6c0e\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"c7b5ccc4c58b46eb01afa8433af9a7d2f6143c47c2078d3d473686467fbc9266\"" Apr 30 00:28:05.230908 containerd[1498]: time="2025-04-30T00:28:05.230862065Z" level=info msg="StartContainer for \"c7b5ccc4c58b46eb01afa8433af9a7d2f6143c47c2078d3d473686467fbc9266\"" Apr 30 00:28:05.252145 systemd[1]: Started cri-containerd-c7b5ccc4c58b46eb01afa8433af9a7d2f6143c47c2078d3d473686467fbc9266.scope - libcontainer container c7b5ccc4c58b46eb01afa8433af9a7d2f6143c47c2078d3d473686467fbc9266. Apr 30 00:28:05.276252 containerd[1498]: time="2025-04-30T00:28:05.276208689Z" level=info msg="StartContainer for \"c7b5ccc4c58b46eb01afa8433af9a7d2f6143c47c2078d3d473686467fbc9266\" returns successfully" Apr 30 00:28:05.415853 containerd[1498]: time="2025-04-30T00:28:05.415734588Z" level=info msg="CreateContainer within sandbox \"5ef57b48e6d15a5060d36d2e6b2ca8d374cf1f99fa07bfac4203473b1e44b1a1\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Apr 30 00:28:05.430048 containerd[1498]: time="2025-04-30T00:28:05.429980162Z" level=info msg="CreateContainer within sandbox \"5ef57b48e6d15a5060d36d2e6b2ca8d374cf1f99fa07bfac4203473b1e44b1a1\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"7f8c12e7db06c65d2d814a4ff7fa4b446eadb1ad9c68e200c1513707b0340915\"" Apr 30 00:28:05.431224 containerd[1498]: time="2025-04-30T00:28:05.430925808Z" level=info msg="StartContainer for \"7f8c12e7db06c65d2d814a4ff7fa4b446eadb1ad9c68e200c1513707b0340915\"" Apr 30 00:28:05.467734 systemd[1]: Started cri-containerd-7f8c12e7db06c65d2d814a4ff7fa4b446eadb1ad9c68e200c1513707b0340915.scope - libcontainer container 7f8c12e7db06c65d2d814a4ff7fa4b446eadb1ad9c68e200c1513707b0340915. 
Apr 30 00:28:05.541351 containerd[1498]: time="2025-04-30T00:28:05.541260884Z" level=info msg="StartContainer for \"7f8c12e7db06c65d2d814a4ff7fa4b446eadb1ad9c68e200c1513707b0340915\" returns successfully" Apr 30 00:28:05.542516 systemd[1]: cri-containerd-7f8c12e7db06c65d2d814a4ff7fa4b446eadb1ad9c68e200c1513707b0340915.scope: Deactivated successfully. Apr 30 00:28:05.556053 kubelet[2769]: I0430 00:28:05.555266 2769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-b7dtb" podStartSLOduration=1.245654798 podStartE2EDuration="8.555247983s" podCreationTimestamp="2025-04-30 00:27:57 +0000 UTC" firstStartedPulling="2025-04-30 00:27:57.901823752 +0000 UTC m=+7.661881416" lastFinishedPulling="2025-04-30 00:28:05.211416937 +0000 UTC m=+14.971474601" observedRunningTime="2025-04-30 00:28:05.499757007 +0000 UTC m=+15.259814671" watchObservedRunningTime="2025-04-30 00:28:05.555247983 +0000 UTC m=+15.315305647" Apr 30 00:28:05.568743 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7f8c12e7db06c65d2d814a4ff7fa4b446eadb1ad9c68e200c1513707b0340915-rootfs.mount: Deactivated successfully. 
Apr 30 00:28:05.592586 containerd[1498]: time="2025-04-30T00:28:05.592495527Z" level=info msg="shim disconnected" id=7f8c12e7db06c65d2d814a4ff7fa4b446eadb1ad9c68e200c1513707b0340915 namespace=k8s.io Apr 30 00:28:05.592586 containerd[1498]: time="2025-04-30T00:28:05.592573533Z" level=warning msg="cleaning up after shim disconnected" id=7f8c12e7db06c65d2d814a4ff7fa4b446eadb1ad9c68e200c1513707b0340915 namespace=k8s.io Apr 30 00:28:05.592586 containerd[1498]: time="2025-04-30T00:28:05.592582961Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 00:28:06.419862 containerd[1498]: time="2025-04-30T00:28:06.419815967Z" level=info msg="CreateContainer within sandbox \"5ef57b48e6d15a5060d36d2e6b2ca8d374cf1f99fa07bfac4203473b1e44b1a1\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Apr 30 00:28:06.436046 containerd[1498]: time="2025-04-30T00:28:06.435991512Z" level=info msg="CreateContainer within sandbox \"5ef57b48e6d15a5060d36d2e6b2ca8d374cf1f99fa07bfac4203473b1e44b1a1\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"61766eb1cd9bac999e06b48f70076c9b0e6dc4c707e3a015450a69de53451e9b\"" Apr 30 00:28:06.436946 containerd[1498]: time="2025-04-30T00:28:06.436501180Z" level=info msg="StartContainer for \"61766eb1cd9bac999e06b48f70076c9b0e6dc4c707e3a015450a69de53451e9b\"" Apr 30 00:28:06.466172 systemd[1]: Started cri-containerd-61766eb1cd9bac999e06b48f70076c9b0e6dc4c707e3a015450a69de53451e9b.scope - libcontainer container 61766eb1cd9bac999e06b48f70076c9b0e6dc4c707e3a015450a69de53451e9b. 
Apr 30 00:28:06.488787 containerd[1498]: time="2025-04-30T00:28:06.488751803Z" level=info msg="StartContainer for \"61766eb1cd9bac999e06b48f70076c9b0e6dc4c707e3a015450a69de53451e9b\" returns successfully" Apr 30 00:28:06.652551 kubelet[2769]: I0430 00:28:06.652325 2769 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Apr 30 00:28:06.683099 systemd[1]: Created slice kubepods-burstable-pod4b33a225_3c9c_42f0_b781_6b18c15aaa1c.slice - libcontainer container kubepods-burstable-pod4b33a225_3c9c_42f0_b781_6b18c15aaa1c.slice. Apr 30 00:28:06.689751 systemd[1]: Created slice kubepods-burstable-podda42d45b_0348_4060_b9de_11e178824a0b.slice - libcontainer container kubepods-burstable-podda42d45b_0348_4060_b9de_11e178824a0b.slice. Apr 30 00:28:06.718381 kubelet[2769]: I0430 00:28:06.718235 2769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/da42d45b-0348-4060-b9de-11e178824a0b-config-volume\") pod \"coredns-6f6b679f8f-57cqm\" (UID: \"da42d45b-0348-4060-b9de-11e178824a0b\") " pod="kube-system/coredns-6f6b679f8f-57cqm" Apr 30 00:28:06.718381 kubelet[2769]: I0430 00:28:06.718273 2769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nt9vt\" (UniqueName: \"kubernetes.io/projected/da42d45b-0348-4060-b9de-11e178824a0b-kube-api-access-nt9vt\") pod \"coredns-6f6b679f8f-57cqm\" (UID: \"da42d45b-0348-4060-b9de-11e178824a0b\") " pod="kube-system/coredns-6f6b679f8f-57cqm" Apr 30 00:28:06.718381 kubelet[2769]: I0430 00:28:06.718290 2769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8ts6\" (UniqueName: \"kubernetes.io/projected/4b33a225-3c9c-42f0-b781-6b18c15aaa1c-kube-api-access-w8ts6\") pod \"coredns-6f6b679f8f-xzsjg\" (UID: \"4b33a225-3c9c-42f0-b781-6b18c15aaa1c\") " pod="kube-system/coredns-6f6b679f8f-xzsjg" Apr 30 
00:28:06.718381 kubelet[2769]: I0430 00:28:06.718305 2769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4b33a225-3c9c-42f0-b781-6b18c15aaa1c-config-volume\") pod \"coredns-6f6b679f8f-xzsjg\" (UID: \"4b33a225-3c9c-42f0-b781-6b18c15aaa1c\") " pod="kube-system/coredns-6f6b679f8f-xzsjg" Apr 30 00:28:06.988257 containerd[1498]: time="2025-04-30T00:28:06.988162534Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-xzsjg,Uid:4b33a225-3c9c-42f0-b781-6b18c15aaa1c,Namespace:kube-system,Attempt:0,}" Apr 30 00:28:06.993544 containerd[1498]: time="2025-04-30T00:28:06.993515589Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-57cqm,Uid:da42d45b-0348-4060-b9de-11e178824a0b,Namespace:kube-system,Attempt:0,}" Apr 30 00:28:07.449723 kubelet[2769]: I0430 00:28:07.449144 2769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-pc4tc" podStartSLOduration=5.84891712 podStartE2EDuration="10.449130104s" podCreationTimestamp="2025-04-30 00:27:57 +0000 UTC" firstStartedPulling="2025-04-30 00:27:57.834856089 +0000 UTC m=+7.594913753" lastFinishedPulling="2025-04-30 00:28:02.435069073 +0000 UTC m=+12.195126737" observedRunningTime="2025-04-30 00:28:07.447880122 +0000 UTC m=+17.207937786" watchObservedRunningTime="2025-04-30 00:28:07.449130104 +0000 UTC m=+17.209187769" Apr 30 00:28:08.584671 systemd-networkd[1399]: cilium_host: Link UP Apr 30 00:28:08.589201 systemd-networkd[1399]: cilium_net: Link UP Apr 30 00:28:08.590402 systemd-networkd[1399]: cilium_net: Gained carrier Apr 30 00:28:08.591474 systemd-networkd[1399]: cilium_host: Gained carrier Apr 30 00:28:08.591698 systemd-networkd[1399]: cilium_net: Gained IPv6LL Apr 30 00:28:08.591974 systemd-networkd[1399]: cilium_host: Gained IPv6LL Apr 30 00:28:08.685786 systemd-networkd[1399]: cilium_vxlan: Link UP Apr 30 00:28:08.685793 
systemd-networkd[1399]: cilium_vxlan: Gained carrier Apr 30 00:28:09.009148 kernel: NET: Registered PF_ALG protocol family Apr 30 00:28:09.595436 systemd-networkd[1399]: lxc_health: Link UP Apr 30 00:28:09.604580 systemd-networkd[1399]: lxc_health: Gained carrier Apr 30 00:28:10.062071 systemd-networkd[1399]: lxc1d1bd8a0693c: Link UP Apr 30 00:28:10.067638 kernel: eth0: renamed from tmpc98bf Apr 30 00:28:10.076086 systemd-networkd[1399]: lxcea46d604aaf8: Link UP Apr 30 00:28:10.081077 kernel: eth0: renamed from tmp9668f Apr 30 00:28:10.084897 systemd-networkd[1399]: lxc1d1bd8a0693c: Gained carrier Apr 30 00:28:10.087111 systemd-networkd[1399]: lxcea46d604aaf8: Gained carrier Apr 30 00:28:10.129172 systemd-networkd[1399]: cilium_vxlan: Gained IPv6LL Apr 30 00:28:11.283620 systemd-networkd[1399]: lxcea46d604aaf8: Gained IPv6LL Apr 30 00:28:11.473198 systemd-networkd[1399]: lxc_health: Gained IPv6LL Apr 30 00:28:11.857239 systemd-networkd[1399]: lxc1d1bd8a0693c: Gained IPv6LL Apr 30 00:28:13.146959 containerd[1498]: time="2025-04-30T00:28:13.142151288Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:28:13.146959 containerd[1498]: time="2025-04-30T00:28:13.142233215Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:28:13.146959 containerd[1498]: time="2025-04-30T00:28:13.142253813Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:28:13.146959 containerd[1498]: time="2025-04-30T00:28:13.142320660Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:28:13.167138 systemd[1]: Started cri-containerd-c98bf339478b4518b969576f64995fd4dcd58f02d8b6afa7b242c897eef2c246.scope - libcontainer container c98bf339478b4518b969576f64995fd4dcd58f02d8b6afa7b242c897eef2c246. Apr 30 00:28:13.195180 containerd[1498]: time="2025-04-30T00:28:13.195068366Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:28:13.195180 containerd[1498]: time="2025-04-30T00:28:13.195148567Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:28:13.195395 containerd[1498]: time="2025-04-30T00:28:13.195159268Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:28:13.195395 containerd[1498]: time="2025-04-30T00:28:13.195236023Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:28:13.226180 systemd[1]: run-containerd-runc-k8s.io-9668faf352f8622f1a15635e6ca3bc0bc8bca7e667148751cc7144ad70a6c0fc-runc.2A5nQn.mount: Deactivated successfully. Apr 30 00:28:13.236186 systemd[1]: Started cri-containerd-9668faf352f8622f1a15635e6ca3bc0bc8bca7e667148751cc7144ad70a6c0fc.scope - libcontainer container 9668faf352f8622f1a15635e6ca3bc0bc8bca7e667148751cc7144ad70a6c0fc. 
Apr 30 00:28:13.240808 containerd[1498]: time="2025-04-30T00:28:13.240470251Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-57cqm,Uid:da42d45b-0348-4060-b9de-11e178824a0b,Namespace:kube-system,Attempt:0,} returns sandbox id \"c98bf339478b4518b969576f64995fd4dcd58f02d8b6afa7b242c897eef2c246\""
Apr 30 00:28:13.247815 containerd[1498]: time="2025-04-30T00:28:13.247624809Z" level=info msg="CreateContainer within sandbox \"c98bf339478b4518b969576f64995fd4dcd58f02d8b6afa7b242c897eef2c246\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Apr 30 00:28:13.288655 containerd[1498]: time="2025-04-30T00:28:13.288542394Z" level=info msg="CreateContainer within sandbox \"c98bf339478b4518b969576f64995fd4dcd58f02d8b6afa7b242c897eef2c246\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e5d3e9cd2205e8f1354e08b82c82216d48e925cf6c34b3bb2c2c6b5268d753b2\""
Apr 30 00:28:13.290490 containerd[1498]: time="2025-04-30T00:28:13.289526036Z" level=info msg="StartContainer for \"e5d3e9cd2205e8f1354e08b82c82216d48e925cf6c34b3bb2c2c6b5268d753b2\""
Apr 30 00:28:13.314721 containerd[1498]: time="2025-04-30T00:28:13.314680731Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-xzsjg,Uid:4b33a225-3c9c-42f0-b781-6b18c15aaa1c,Namespace:kube-system,Attempt:0,} returns sandbox id \"9668faf352f8622f1a15635e6ca3bc0bc8bca7e667148751cc7144ad70a6c0fc\""
Apr 30 00:28:13.318980 containerd[1498]: time="2025-04-30T00:28:13.318683942Z" level=info msg="CreateContainer within sandbox \"9668faf352f8622f1a15635e6ca3bc0bc8bca7e667148751cc7144ad70a6c0fc\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Apr 30 00:28:13.329163 systemd[1]: Started cri-containerd-e5d3e9cd2205e8f1354e08b82c82216d48e925cf6c34b3bb2c2c6b5268d753b2.scope - libcontainer container e5d3e9cd2205e8f1354e08b82c82216d48e925cf6c34b3bb2c2c6b5268d753b2.
Apr 30 00:28:13.337393 containerd[1498]: time="2025-04-30T00:28:13.337357674Z" level=info msg="CreateContainer within sandbox \"9668faf352f8622f1a15635e6ca3bc0bc8bca7e667148751cc7144ad70a6c0fc\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b5a694bb21232f61a981bdbee90d35d367c88c385ecd35aca62b2629ed9868ab\""
Apr 30 00:28:13.338199 containerd[1498]: time="2025-04-30T00:28:13.338106893Z" level=info msg="StartContainer for \"b5a694bb21232f61a981bdbee90d35d367c88c385ecd35aca62b2629ed9868ab\""
Apr 30 00:28:13.363377 systemd[1]: Started cri-containerd-b5a694bb21232f61a981bdbee90d35d367c88c385ecd35aca62b2629ed9868ab.scope - libcontainer container b5a694bb21232f61a981bdbee90d35d367c88c385ecd35aca62b2629ed9868ab.
Apr 30 00:28:13.367128 containerd[1498]: time="2025-04-30T00:28:13.365980135Z" level=info msg="StartContainer for \"e5d3e9cd2205e8f1354e08b82c82216d48e925cf6c34b3bb2c2c6b5268d753b2\" returns successfully"
Apr 30 00:28:13.385790 containerd[1498]: time="2025-04-30T00:28:13.385732819Z" level=info msg="StartContainer for \"b5a694bb21232f61a981bdbee90d35d367c88c385ecd35aca62b2629ed9868ab\" returns successfully"
Apr 30 00:28:13.458104 kubelet[2769]: I0430 00:28:13.457954 2769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-57cqm" podStartSLOduration=16.457939655 podStartE2EDuration="16.457939655s" podCreationTimestamp="2025-04-30 00:27:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 00:28:13.457388582 +0000 UTC m=+23.217446246" watchObservedRunningTime="2025-04-30 00:28:13.457939655 +0000 UTC m=+23.217997319"
Apr 30 00:28:13.479113 kubelet[2769]: I0430 00:28:13.478564 2769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-xzsjg" podStartSLOduration=16.478545585 podStartE2EDuration="16.478545585s" podCreationTimestamp="2025-04-30 00:27:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 00:28:13.476200956 +0000 UTC m=+23.236258620" watchObservedRunningTime="2025-04-30 00:28:13.478545585 +0000 UTC m=+23.238603249"
Apr 30 00:32:27.449642 systemd[1]: Started sshd@8-37.27.9.63:22-139.178.89.65:47362.service - OpenSSH per-connection server daemon (139.178.89.65:47362).
Apr 30 00:32:28.439067 sshd[4177]: Accepted publickey for core from 139.178.89.65 port 47362 ssh2: RSA SHA256:Z5W/GaQaT6a1P913sOSNU1tCxAIyxSb0Z7O2Sul6T8E
Apr 30 00:32:28.440792 sshd-session[4177]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:32:28.445163 systemd-logind[1474]: New session 8 of user core.
Apr 30 00:32:28.449136 systemd[1]: Started session-8.scope - Session 8 of User core.
Apr 30 00:32:29.544959 sshd[4181]: Connection closed by 139.178.89.65 port 47362
Apr 30 00:32:29.545534 sshd-session[4177]: pam_unix(sshd:session): session closed for user core
Apr 30 00:32:29.548143 systemd[1]: sshd@8-37.27.9.63:22-139.178.89.65:47362.service: Deactivated successfully.
Apr 30 00:32:29.549887 systemd[1]: session-8.scope: Deactivated successfully.
Apr 30 00:32:29.550930 systemd-logind[1474]: Session 8 logged out. Waiting for processes to exit.
Apr 30 00:32:29.552541 systemd-logind[1474]: Removed session 8.
Apr 30 00:32:34.714971 systemd[1]: Started sshd@9-37.27.9.63:22-139.178.89.65:47364.service - OpenSSH per-connection server daemon (139.178.89.65:47364).
Apr 30 00:32:35.685712 sshd[4193]: Accepted publickey for core from 139.178.89.65 port 47364 ssh2: RSA SHA256:Z5W/GaQaT6a1P913sOSNU1tCxAIyxSb0Z7O2Sul6T8E
Apr 30 00:32:35.687042 sshd-session[4193]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:32:35.691453 systemd-logind[1474]: New session 9 of user core.
Apr 30 00:32:35.694201 systemd[1]: Started session-9.scope - Session 9 of User core.
Apr 30 00:32:36.428867 sshd[4195]: Connection closed by 139.178.89.65 port 47364
Apr 30 00:32:36.429534 sshd-session[4193]: pam_unix(sshd:session): session closed for user core
Apr 30 00:32:36.433513 systemd[1]: sshd@9-37.27.9.63:22-139.178.89.65:47364.service: Deactivated successfully.
Apr 30 00:32:36.435413 systemd[1]: session-9.scope: Deactivated successfully.
Apr 30 00:32:36.436180 systemd-logind[1474]: Session 9 logged out. Waiting for processes to exit.
Apr 30 00:32:36.437221 systemd-logind[1474]: Removed session 9.
Apr 30 00:32:41.602447 systemd[1]: Started sshd@10-37.27.9.63:22-139.178.89.65:60718.service - OpenSSH per-connection server daemon (139.178.89.65:60718).
Apr 30 00:32:42.578629 sshd[4207]: Accepted publickey for core from 139.178.89.65 port 60718 ssh2: RSA SHA256:Z5W/GaQaT6a1P913sOSNU1tCxAIyxSb0Z7O2Sul6T8E
Apr 30 00:32:42.580710 sshd-session[4207]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:32:42.588100 systemd-logind[1474]: New session 10 of user core.
Apr 30 00:32:42.597769 systemd[1]: Started session-10.scope - Session 10 of User core.
Apr 30 00:32:43.332422 sshd[4209]: Connection closed by 139.178.89.65 port 60718
Apr 30 00:32:43.333285 sshd-session[4207]: pam_unix(sshd:session): session closed for user core
Apr 30 00:32:43.336309 systemd[1]: sshd@10-37.27.9.63:22-139.178.89.65:60718.service: Deactivated successfully.
Apr 30 00:32:43.337914 systemd[1]: session-10.scope: Deactivated successfully.
Apr 30 00:32:43.338644 systemd-logind[1474]: Session 10 logged out. Waiting for processes to exit.
Apr 30 00:32:43.339640 systemd-logind[1474]: Removed session 10.
Apr 30 00:32:43.509380 systemd[1]: Started sshd@11-37.27.9.63:22-139.178.89.65:60726.service - OpenSSH per-connection server daemon (139.178.89.65:60726).
Apr 30 00:32:44.488804 sshd[4221]: Accepted publickey for core from 139.178.89.65 port 60726 ssh2: RSA SHA256:Z5W/GaQaT6a1P913sOSNU1tCxAIyxSb0Z7O2Sul6T8E
Apr 30 00:32:44.490180 sshd-session[4221]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:32:44.495085 systemd-logind[1474]: New session 11 of user core.
Apr 30 00:32:44.499344 systemd[1]: Started session-11.scope - Session 11 of User core.
Apr 30 00:32:45.276509 sshd[4223]: Connection closed by 139.178.89.65 port 60726
Apr 30 00:32:45.277299 sshd-session[4221]: pam_unix(sshd:session): session closed for user core
Apr 30 00:32:45.283172 systemd-logind[1474]: Session 11 logged out. Waiting for processes to exit.
Apr 30 00:32:45.283778 systemd[1]: sshd@11-37.27.9.63:22-139.178.89.65:60726.service: Deactivated successfully.
Apr 30 00:32:45.285237 systemd[1]: session-11.scope: Deactivated successfully.
Apr 30 00:32:45.286060 systemd-logind[1474]: Removed session 11.
Apr 30 00:32:45.443896 systemd[1]: Started sshd@12-37.27.9.63:22-139.178.89.65:60740.service - OpenSSH per-connection server daemon (139.178.89.65:60740).
Apr 30 00:32:46.431949 sshd[4232]: Accepted publickey for core from 139.178.89.65 port 60740 ssh2: RSA SHA256:Z5W/GaQaT6a1P913sOSNU1tCxAIyxSb0Z7O2Sul6T8E
Apr 30 00:32:46.433871 sshd-session[4232]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:32:46.438609 systemd-logind[1474]: New session 12 of user core.
Apr 30 00:32:46.445137 systemd[1]: Started session-12.scope - Session 12 of User core.
Apr 30 00:32:47.194535 sshd[4234]: Connection closed by 139.178.89.65 port 60740
Apr 30 00:32:47.195237 sshd-session[4232]: pam_unix(sshd:session): session closed for user core
Apr 30 00:32:47.197924 systemd[1]: sshd@12-37.27.9.63:22-139.178.89.65:60740.service: Deactivated successfully.
Apr 30 00:32:47.199789 systemd[1]: session-12.scope: Deactivated successfully.
Apr 30 00:32:47.201236 systemd-logind[1474]: Session 12 logged out. Waiting for processes to exit.
Apr 30 00:32:47.202643 systemd-logind[1474]: Removed session 12.
Apr 30 00:32:52.360813 systemd[1]: Started sshd@13-37.27.9.63:22-139.178.89.65:34600.service - OpenSSH per-connection server daemon (139.178.89.65:34600).
Apr 30 00:32:53.336288 sshd[4247]: Accepted publickey for core from 139.178.89.65 port 34600 ssh2: RSA SHA256:Z5W/GaQaT6a1P913sOSNU1tCxAIyxSb0Z7O2Sul6T8E
Apr 30 00:32:53.338301 sshd-session[4247]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:32:53.345910 systemd-logind[1474]: New session 13 of user core.
Apr 30 00:32:53.356272 systemd[1]: Started session-13.scope - Session 13 of User core.
Apr 30 00:32:54.097148 sshd[4249]: Connection closed by 139.178.89.65 port 34600
Apr 30 00:32:54.097964 sshd-session[4247]: pam_unix(sshd:session): session closed for user core
Apr 30 00:32:54.102342 systemd-logind[1474]: Session 13 logged out. Waiting for processes to exit.
Apr 30 00:32:54.103168 systemd[1]: sshd@13-37.27.9.63:22-139.178.89.65:34600.service: Deactivated successfully.
Apr 30 00:32:54.105392 systemd[1]: session-13.scope: Deactivated successfully.
Apr 30 00:32:54.106682 systemd-logind[1474]: Removed session 13.
Apr 30 00:32:54.278398 systemd[1]: Started sshd@14-37.27.9.63:22-139.178.89.65:34614.service - OpenSSH per-connection server daemon (139.178.89.65:34614).
Apr 30 00:32:55.255708 sshd[4261]: Accepted publickey for core from 139.178.89.65 port 34614 ssh2: RSA SHA256:Z5W/GaQaT6a1P913sOSNU1tCxAIyxSb0Z7O2Sul6T8E
Apr 30 00:32:55.257653 sshd-session[4261]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:32:55.265232 systemd-logind[1474]: New session 14 of user core.
Apr 30 00:32:55.270272 systemd[1]: Started session-14.scope - Session 14 of User core.
Apr 30 00:32:56.267715 sshd[4263]: Connection closed by 139.178.89.65 port 34614
Apr 30 00:32:56.269643 sshd-session[4261]: pam_unix(sshd:session): session closed for user core
Apr 30 00:32:56.274175 systemd[1]: sshd@14-37.27.9.63:22-139.178.89.65:34614.service: Deactivated successfully.
Apr 30 00:32:56.276418 systemd[1]: session-14.scope: Deactivated successfully.
Apr 30 00:32:56.277804 systemd-logind[1474]: Session 14 logged out. Waiting for processes to exit.
Apr 30 00:32:56.279996 systemd-logind[1474]: Removed session 14.
Apr 30 00:32:56.438467 systemd[1]: Started sshd@15-37.27.9.63:22-139.178.89.65:34618.service - OpenSSH per-connection server daemon (139.178.89.65:34618).
Apr 30 00:32:57.417777 sshd[4271]: Accepted publickey for core from 139.178.89.65 port 34618 ssh2: RSA SHA256:Z5W/GaQaT6a1P913sOSNU1tCxAIyxSb0Z7O2Sul6T8E
Apr 30 00:32:57.419108 sshd-session[4271]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:32:57.423571 systemd-logind[1474]: New session 15 of user core.
Apr 30 00:32:57.426157 systemd[1]: Started session-15.scope - Session 15 of User core.
Apr 30 00:32:59.727380 sshd[4273]: Connection closed by 139.178.89.65 port 34618
Apr 30 00:32:59.728404 sshd-session[4271]: pam_unix(sshd:session): session closed for user core
Apr 30 00:32:59.734945 systemd[1]: sshd@15-37.27.9.63:22-139.178.89.65:34618.service: Deactivated successfully.
Apr 30 00:32:59.738071 systemd[1]: session-15.scope: Deactivated successfully.
Apr 30 00:32:59.739068 systemd-logind[1474]: Session 15 logged out. Waiting for processes to exit.
Apr 30 00:32:59.740902 systemd-logind[1474]: Removed session 15.
Apr 30 00:32:59.902579 systemd[1]: Started sshd@16-37.27.9.63:22-139.178.89.65:56928.service - OpenSSH per-connection server daemon (139.178.89.65:56928).
Apr 30 00:33:00.876514 sshd[4291]: Accepted publickey for core from 139.178.89.65 port 56928 ssh2: RSA SHA256:Z5W/GaQaT6a1P913sOSNU1tCxAIyxSb0Z7O2Sul6T8E
Apr 30 00:33:00.877862 sshd-session[4291]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:33:00.882340 systemd-logind[1474]: New session 16 of user core.
Apr 30 00:33:00.891184 systemd[1]: Started session-16.scope - Session 16 of User core.
Apr 30 00:33:01.775596 sshd[4293]: Connection closed by 139.178.89.65 port 56928
Apr 30 00:33:01.776277 sshd-session[4291]: pam_unix(sshd:session): session closed for user core
Apr 30 00:33:01.779728 systemd[1]: sshd@16-37.27.9.63:22-139.178.89.65:56928.service: Deactivated successfully.
Apr 30 00:33:01.780980 systemd[1]: session-16.scope: Deactivated successfully.
Apr 30 00:33:01.781772 systemd-logind[1474]: Session 16 logged out. Waiting for processes to exit.
Apr 30 00:33:01.783043 systemd-logind[1474]: Removed session 16.
Apr 30 00:33:01.944292 systemd[1]: Started sshd@17-37.27.9.63:22-139.178.89.65:56944.service - OpenSSH per-connection server daemon (139.178.89.65:56944).
Apr 30 00:33:02.911233 sshd[4302]: Accepted publickey for core from 139.178.89.65 port 56944 ssh2: RSA SHA256:Z5W/GaQaT6a1P913sOSNU1tCxAIyxSb0Z7O2Sul6T8E
Apr 30 00:33:02.912695 sshd-session[4302]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:33:02.917342 systemd-logind[1474]: New session 17 of user core.
Apr 30 00:33:02.924174 systemd[1]: Started session-17.scope - Session 17 of User core.
Apr 30 00:33:03.647240 sshd[4304]: Connection closed by 139.178.89.65 port 56944
Apr 30 00:33:03.647806 sshd-session[4302]: pam_unix(sshd:session): session closed for user core
Apr 30 00:33:03.651127 systemd[1]: sshd@17-37.27.9.63:22-139.178.89.65:56944.service: Deactivated successfully.
Apr 30 00:33:03.652889 systemd[1]: session-17.scope: Deactivated successfully.
Apr 30 00:33:03.653602 systemd-logind[1474]: Session 17 logged out. Waiting for processes to exit.
Apr 30 00:33:03.654668 systemd-logind[1474]: Removed session 17.
Apr 30 00:33:08.820466 systemd[1]: Started sshd@18-37.27.9.63:22-139.178.89.65:55042.service - OpenSSH per-connection server daemon (139.178.89.65:55042).
Apr 30 00:33:09.803858 sshd[4318]: Accepted publickey for core from 139.178.89.65 port 55042 ssh2: RSA SHA256:Z5W/GaQaT6a1P913sOSNU1tCxAIyxSb0Z7O2Sul6T8E
Apr 30 00:33:09.805671 sshd-session[4318]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:33:09.812915 systemd-logind[1474]: New session 18 of user core.
Apr 30 00:33:09.819219 systemd[1]: Started session-18.scope - Session 18 of User core.
Apr 30 00:33:10.537171 sshd[4320]: Connection closed by 139.178.89.65 port 55042
Apr 30 00:33:10.537847 sshd-session[4318]: pam_unix(sshd:session): session closed for user core
Apr 30 00:33:10.540763 systemd[1]: sshd@18-37.27.9.63:22-139.178.89.65:55042.service: Deactivated successfully.
Apr 30 00:33:10.543048 systemd[1]: session-18.scope: Deactivated successfully.
Apr 30 00:33:10.544703 systemd-logind[1474]: Session 18 logged out. Waiting for processes to exit.
Apr 30 00:33:10.546412 systemd-logind[1474]: Removed session 18.
Apr 30 00:33:15.702191 systemd[1]: Started sshd@19-37.27.9.63:22-139.178.89.65:55046.service - OpenSSH per-connection server daemon (139.178.89.65:55046).
Apr 30 00:33:16.677259 sshd[4331]: Accepted publickey for core from 139.178.89.65 port 55046 ssh2: RSA SHA256:Z5W/GaQaT6a1P913sOSNU1tCxAIyxSb0Z7O2Sul6T8E
Apr 30 00:33:16.679220 sshd-session[4331]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:33:16.687168 systemd-logind[1474]: New session 19 of user core.
Apr 30 00:33:16.693274 systemd[1]: Started session-19.scope - Session 19 of User core.
Apr 30 00:33:17.446892 sshd[4333]: Connection closed by 139.178.89.65 port 55046
Apr 30 00:33:17.447504 sshd-session[4331]: pam_unix(sshd:session): session closed for user core
Apr 30 00:33:17.449921 systemd[1]: sshd@19-37.27.9.63:22-139.178.89.65:55046.service: Deactivated successfully.
Apr 30 00:33:17.451956 systemd-logind[1474]: Session 19 logged out. Waiting for processes to exit.
Apr 30 00:33:17.452764 systemd[1]: session-19.scope: Deactivated successfully.
Apr 30 00:33:17.453699 systemd-logind[1474]: Removed session 19.
Apr 30 00:33:17.614691 systemd[1]: Started sshd@20-37.27.9.63:22-139.178.89.65:49090.service - OpenSSH per-connection server daemon (139.178.89.65:49090).
Apr 30 00:33:18.589964 sshd[4344]: Accepted publickey for core from 139.178.89.65 port 49090 ssh2: RSA SHA256:Z5W/GaQaT6a1P913sOSNU1tCxAIyxSb0Z7O2Sul6T8E
Apr 30 00:33:18.591390 sshd-session[4344]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:33:18.595404 systemd-logind[1474]: New session 20 of user core.
Apr 30 00:33:18.602151 systemd[1]: Started session-20.scope - Session 20 of User core.
Apr 30 00:33:20.468934 containerd[1498]: time="2025-04-30T00:33:20.468889287Z" level=info msg="StopContainer for \"c7b5ccc4c58b46eb01afa8433af9a7d2f6143c47c2078d3d473686467fbc9266\" with timeout 30 (s)"
Apr 30 00:33:20.470897 containerd[1498]: time="2025-04-30T00:33:20.470790183Z" level=info msg="Stop container \"c7b5ccc4c58b46eb01afa8433af9a7d2f6143c47c2078d3d473686467fbc9266\" with signal terminated"
Apr 30 00:33:20.476451 containerd[1498]: time="2025-04-30T00:33:20.476419502Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Apr 30 00:33:20.482125 systemd[1]: cri-containerd-c7b5ccc4c58b46eb01afa8433af9a7d2f6143c47c2078d3d473686467fbc9266.scope: Deactivated successfully.
Apr 30 00:33:20.491282 containerd[1498]: time="2025-04-30T00:33:20.490900790Z" level=info msg="StopContainer for \"61766eb1cd9bac999e06b48f70076c9b0e6dc4c707e3a015450a69de53451e9b\" with timeout 2 (s)"
Apr 30 00:33:20.492245 containerd[1498]: time="2025-04-30T00:33:20.492221441Z" level=info msg="Stop container \"61766eb1cd9bac999e06b48f70076c9b0e6dc4c707e3a015450a69de53451e9b\" with signal terminated"
Apr 30 00:33:20.504690 systemd-networkd[1399]: lxc_health: Link DOWN
Apr 30 00:33:20.504697 systemd-networkd[1399]: lxc_health: Lost carrier
Apr 30 00:33:20.519548 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c7b5ccc4c58b46eb01afa8433af9a7d2f6143c47c2078d3d473686467fbc9266-rootfs.mount: Deactivated successfully.
Apr 30 00:33:20.531867 containerd[1498]: time="2025-04-30T00:33:20.531795250Z" level=info msg="shim disconnected" id=c7b5ccc4c58b46eb01afa8433af9a7d2f6143c47c2078d3d473686467fbc9266 namespace=k8s.io
Apr 30 00:33:20.531867 containerd[1498]: time="2025-04-30T00:33:20.531842059Z" level=warning msg="cleaning up after shim disconnected" id=c7b5ccc4c58b46eb01afa8433af9a7d2f6143c47c2078d3d473686467fbc9266 namespace=k8s.io
Apr 30 00:33:20.531867 containerd[1498]: time="2025-04-30T00:33:20.531851166Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 30 00:33:20.535187 systemd[1]: cri-containerd-61766eb1cd9bac999e06b48f70076c9b0e6dc4c707e3a015450a69de53451e9b.scope: Deactivated successfully.
Apr 30 00:33:20.535370 systemd[1]: cri-containerd-61766eb1cd9bac999e06b48f70076c9b0e6dc4c707e3a015450a69de53451e9b.scope: Consumed 6.475s CPU time.
Apr 30 00:33:20.547114 containerd[1498]: time="2025-04-30T00:33:20.546958549Z" level=info msg="StopContainer for \"c7b5ccc4c58b46eb01afa8433af9a7d2f6143c47c2078d3d473686467fbc9266\" returns successfully"
Apr 30 00:33:20.560225 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-61766eb1cd9bac999e06b48f70076c9b0e6dc4c707e3a015450a69de53451e9b-rootfs.mount: Deactivated successfully.
Apr 30 00:33:20.562033 containerd[1498]: time="2025-04-30T00:33:20.561335997Z" level=info msg="StopPodSandbox for \"93f047671f000fa91503ca5b61f083f48865c68e4a36228aeb16f35d1eed6c0e\""
Apr 30 00:33:20.562466 containerd[1498]: time="2025-04-30T00:33:20.562399102Z" level=info msg="Container to stop \"c7b5ccc4c58b46eb01afa8433af9a7d2f6143c47c2078d3d473686467fbc9266\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 30 00:33:20.565257 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-93f047671f000fa91503ca5b61f083f48865c68e4a36228aeb16f35d1eed6c0e-shm.mount: Deactivated successfully.
Apr 30 00:33:20.570178 containerd[1498]: time="2025-04-30T00:33:20.570125635Z" level=info msg="shim disconnected" id=61766eb1cd9bac999e06b48f70076c9b0e6dc4c707e3a015450a69de53451e9b namespace=k8s.io
Apr 30 00:33:20.570178 containerd[1498]: time="2025-04-30T00:33:20.570165111Z" level=warning msg="cleaning up after shim disconnected" id=61766eb1cd9bac999e06b48f70076c9b0e6dc4c707e3a015450a69de53451e9b namespace=k8s.io
Apr 30 00:33:20.570178 containerd[1498]: time="2025-04-30T00:33:20.570173107Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 30 00:33:20.571371 systemd[1]: cri-containerd-93f047671f000fa91503ca5b61f083f48865c68e4a36228aeb16f35d1eed6c0e.scope: Deactivated successfully.
Apr 30 00:33:20.580839 containerd[1498]: time="2025-04-30T00:33:20.580816902Z" level=warning msg="cleanup warnings time=\"2025-04-30T00:33:20Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Apr 30 00:33:20.583091 containerd[1498]: time="2025-04-30T00:33:20.583073613Z" level=info msg="StopContainer for \"61766eb1cd9bac999e06b48f70076c9b0e6dc4c707e3a015450a69de53451e9b\" returns successfully"
Apr 30 00:33:20.583993 containerd[1498]: time="2025-04-30T00:33:20.583690780Z" level=info msg="StopPodSandbox for \"5ef57b48e6d15a5060d36d2e6b2ca8d374cf1f99fa07bfac4203473b1e44b1a1\""
Apr 30 00:33:20.584146 containerd[1498]: time="2025-04-30T00:33:20.584111801Z" level=info msg="Container to stop \"d7249f3ec7df1a2c98b8c6aec85204d533880db3e513880ffa832c4c4a665485\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 30 00:33:20.584258 containerd[1498]: time="2025-04-30T00:33:20.584244586Z" level=info msg="Container to stop \"7f8c12e7db06c65d2d814a4ff7fa4b446eadb1ad9c68e200c1513707b0340915\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 30 00:33:20.584310 containerd[1498]: time="2025-04-30T00:33:20.584299902Z" level=info msg="Container to stop \"61766eb1cd9bac999e06b48f70076c9b0e6dc4c707e3a015450a69de53451e9b\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 30 00:33:20.584397 containerd[1498]: time="2025-04-30T00:33:20.584376280Z" level=info msg="Container to stop \"4716543a28e92ed941e3e41c94a0e03cacf02d46c6f8e0ccc898208e770f22fe\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 30 00:33:20.584456 containerd[1498]: time="2025-04-30T00:33:20.584444971Z" level=info msg="Container to stop \"2e1fd780197f9eb47b9b1aef932c3275cd3f2916c531c1bae6b0a08322242def\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 30 00:33:20.585892 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5ef57b48e6d15a5060d36d2e6b2ca8d374cf1f99fa07bfac4203473b1e44b1a1-shm.mount: Deactivated successfully.
Apr 30 00:33:20.592032 systemd[1]: cri-containerd-5ef57b48e6d15a5060d36d2e6b2ca8d374cf1f99fa07bfac4203473b1e44b1a1.scope: Deactivated successfully.
Apr 30 00:33:20.603865 containerd[1498]: time="2025-04-30T00:33:20.603750069Z" level=info msg="shim disconnected" id=93f047671f000fa91503ca5b61f083f48865c68e4a36228aeb16f35d1eed6c0e namespace=k8s.io
Apr 30 00:33:20.603865 containerd[1498]: time="2025-04-30T00:33:20.603787801Z" level=warning msg="cleaning up after shim disconnected" id=93f047671f000fa91503ca5b61f083f48865c68e4a36228aeb16f35d1eed6c0e namespace=k8s.io
Apr 30 00:33:20.603865 containerd[1498]: time="2025-04-30T00:33:20.603795014Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 30 00:33:20.612520 containerd[1498]: time="2025-04-30T00:33:20.612274446Z" level=info msg="shim disconnected" id=5ef57b48e6d15a5060d36d2e6b2ca8d374cf1f99fa07bfac4203473b1e44b1a1 namespace=k8s.io
Apr 30 00:33:20.612520 containerd[1498]: time="2025-04-30T00:33:20.612312851Z" level=warning msg="cleaning up after shim disconnected" id=5ef57b48e6d15a5060d36d2e6b2ca8d374cf1f99fa07bfac4203473b1e44b1a1 namespace=k8s.io
Apr 30 00:33:20.612520 containerd[1498]: time="2025-04-30T00:33:20.612320185Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 30 00:33:20.616746 containerd[1498]: time="2025-04-30T00:33:20.616596350Z" level=info msg="TearDown network for sandbox \"93f047671f000fa91503ca5b61f083f48865c68e4a36228aeb16f35d1eed6c0e\" successfully"
Apr 30 00:33:20.616746 containerd[1498]: time="2025-04-30T00:33:20.616618162Z" level=info msg="StopPodSandbox for \"93f047671f000fa91503ca5b61f083f48865c68e4a36228aeb16f35d1eed6c0e\" returns successfully"
Apr 30 00:33:20.632912 containerd[1498]: time="2025-04-30T00:33:20.632883812Z" level=info msg="TearDown network for sandbox \"5ef57b48e6d15a5060d36d2e6b2ca8d374cf1f99fa07bfac4203473b1e44b1a1\" successfully"
Apr 30 00:33:20.633071 containerd[1498]: time="2025-04-30T00:33:20.633045684Z" level=info msg="StopPodSandbox for \"5ef57b48e6d15a5060d36d2e6b2ca8d374cf1f99fa07bfac4203473b1e44b1a1\" returns successfully"
Apr 30 00:33:20.739762 kubelet[2769]: I0430 00:33:20.738655 2769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4gxtm\" (UniqueName: \"kubernetes.io/projected/0b88508a-f8bf-4158-a4a8-cd11af4aef0f-kube-api-access-4gxtm\") pod \"0b88508a-f8bf-4158-a4a8-cd11af4aef0f\" (UID: \"0b88508a-f8bf-4158-a4a8-cd11af4aef0f\") "
Apr 30 00:33:20.739762 kubelet[2769]: I0430 00:33:20.738713 2769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0b88508a-f8bf-4158-a4a8-cd11af4aef0f-cilium-config-path\") pod \"0b88508a-f8bf-4158-a4a8-cd11af4aef0f\" (UID: \"0b88508a-f8bf-4158-a4a8-cd11af4aef0f\") "
Apr 30 00:33:20.739762 kubelet[2769]: I0430 00:33:20.738733 2769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gfqxl\" (UniqueName: \"kubernetes.io/projected/2036a4e1-a065-4a97-b00d-90f227ad1c4b-kube-api-access-gfqxl\") pod \"2036a4e1-a065-4a97-b00d-90f227ad1c4b\" (UID: \"2036a4e1-a065-4a97-b00d-90f227ad1c4b\") "
Apr 30 00:33:20.739762 kubelet[2769]: I0430 00:33:20.738749 2769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0b88508a-f8bf-4158-a4a8-cd11af4aef0f-lib-modules\") pod \"0b88508a-f8bf-4158-a4a8-cd11af4aef0f\" (UID: \"0b88508a-f8bf-4158-a4a8-cd11af4aef0f\") "
Apr 30 00:33:20.739762 kubelet[2769]: I0430 00:33:20.738765 2769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0b88508a-f8bf-4158-a4a8-cd11af4aef0f-etc-cni-netd\") pod \"0b88508a-f8bf-4158-a4a8-cd11af4aef0f\" (UID: \"0b88508a-f8bf-4158-a4a8-cd11af4aef0f\") "
Apr 30 00:33:20.739762 kubelet[2769]: I0430 00:33:20.738780 2769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0b88508a-f8bf-4158-a4a8-cd11af4aef0f-clustermesh-secrets\") pod \"0b88508a-f8bf-4158-a4a8-cd11af4aef0f\" (UID: \"0b88508a-f8bf-4158-a4a8-cd11af4aef0f\") "
Apr 30 00:33:20.740715 kubelet[2769]: I0430 00:33:20.738795 2769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2036a4e1-a065-4a97-b00d-90f227ad1c4b-cilium-config-path\") pod \"2036a4e1-a065-4a97-b00d-90f227ad1c4b\" (UID: \"2036a4e1-a065-4a97-b00d-90f227ad1c4b\") "
Apr 30 00:33:20.740715 kubelet[2769]: I0430 00:33:20.738809 2769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0b88508a-f8bf-4158-a4a8-cd11af4aef0f-hostproc\") pod \"0b88508a-f8bf-4158-a4a8-cd11af4aef0f\" (UID: \"0b88508a-f8bf-4158-a4a8-cd11af4aef0f\") "
Apr 30 00:33:20.740715 kubelet[2769]: I0430 00:33:20.738822 2769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0b88508a-f8bf-4158-a4a8-cd11af4aef0f-cilium-cgroup\") pod \"0b88508a-f8bf-4158-a4a8-cd11af4aef0f\" (UID: \"0b88508a-f8bf-4158-a4a8-cd11af4aef0f\") "
Apr 30 00:33:20.740715 kubelet[2769]: I0430 00:33:20.738834 2769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0b88508a-f8bf-4158-a4a8-cd11af4aef0f-xtables-lock\") pod \"0b88508a-f8bf-4158-a4a8-cd11af4aef0f\" (UID: \"0b88508a-f8bf-4158-a4a8-cd11af4aef0f\") "
Apr 30 00:33:20.740715 kubelet[2769]: I0430 00:33:20.738847 2769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0b88508a-f8bf-4158-a4a8-cd11af4aef0f-cilium-run\") pod \"0b88508a-f8bf-4158-a4a8-cd11af4aef0f\" (UID: \"0b88508a-f8bf-4158-a4a8-cd11af4aef0f\") "
Apr 30 00:33:20.740715 kubelet[2769]: I0430 00:33:20.738859 2769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0b88508a-f8bf-4158-a4a8-cd11af4aef0f-bpf-maps\") pod \"0b88508a-f8bf-4158-a4a8-cd11af4aef0f\" (UID: \"0b88508a-f8bf-4158-a4a8-cd11af4aef0f\") "
Apr 30 00:33:20.740866 kubelet[2769]: I0430 00:33:20.738874 2769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0b88508a-f8bf-4158-a4a8-cd11af4aef0f-host-proc-sys-kernel\") pod \"0b88508a-f8bf-4158-a4a8-cd11af4aef0f\" (UID: \"0b88508a-f8bf-4158-a4a8-cd11af4aef0f\") "
Apr 30 00:33:20.740866 kubelet[2769]: I0430 00:33:20.738890 2769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0b88508a-f8bf-4158-a4a8-cd11af4aef0f-hubble-tls\") pod \"0b88508a-f8bf-4158-a4a8-cd11af4aef0f\" (UID: \"0b88508a-f8bf-4158-a4a8-cd11af4aef0f\") "
Apr 30 00:33:20.740866 kubelet[2769]: I0430 00:33:20.738903 2769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0b88508a-f8bf-4158-a4a8-cd11af4aef0f-host-proc-sys-net\") pod \"0b88508a-f8bf-4158-a4a8-cd11af4aef0f\" (UID: \"0b88508a-f8bf-4158-a4a8-cd11af4aef0f\") "
Apr 30 00:33:20.740866 kubelet[2769]: I0430 00:33:20.738917 2769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0b88508a-f8bf-4158-a4a8-cd11af4aef0f-cni-path\") pod \"0b88508a-f8bf-4158-a4a8-cd11af4aef0f\" (UID: \"0b88508a-f8bf-4158-a4a8-cd11af4aef0f\") "
Apr 30 00:33:20.741304 kubelet[2769]: I0430 00:33:20.740158 2769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0b88508a-f8bf-4158-a4a8-cd11af4aef0f-hostproc" (OuterVolumeSpecName: "hostproc") pod "0b88508a-f8bf-4158-a4a8-cd11af4aef0f" (UID: "0b88508a-f8bf-4158-a4a8-cd11af4aef0f"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 30 00:33:20.743880 kubelet[2769]: I0430 00:33:20.743391 2769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b88508a-f8bf-4158-a4a8-cd11af4aef0f-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "0b88508a-f8bf-4158-a4a8-cd11af4aef0f" (UID: "0b88508a-f8bf-4158-a4a8-cd11af4aef0f"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Apr 30 00:33:20.743880 kubelet[2769]: I0430 00:33:20.740090 2769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0b88508a-f8bf-4158-a4a8-cd11af4aef0f-cni-path" (OuterVolumeSpecName: "cni-path") pod "0b88508a-f8bf-4158-a4a8-cd11af4aef0f" (UID: "0b88508a-f8bf-4158-a4a8-cd11af4aef0f"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 30 00:33:20.743880 kubelet[2769]: I0430 00:33:20.743585 2769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0b88508a-f8bf-4158-a4a8-cd11af4aef0f-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "0b88508a-f8bf-4158-a4a8-cd11af4aef0f" (UID: "0b88508a-f8bf-4158-a4a8-cd11af4aef0f"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 30 00:33:20.743880 kubelet[2769]: I0430 00:33:20.743609 2769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0b88508a-f8bf-4158-a4a8-cd11af4aef0f-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "0b88508a-f8bf-4158-a4a8-cd11af4aef0f" (UID: "0b88508a-f8bf-4158-a4a8-cd11af4aef0f"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 30 00:33:20.743880 kubelet[2769]: I0430 00:33:20.743622 2769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0b88508a-f8bf-4158-a4a8-cd11af4aef0f-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "0b88508a-f8bf-4158-a4a8-cd11af4aef0f" (UID: "0b88508a-f8bf-4158-a4a8-cd11af4aef0f"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 30 00:33:20.744062 kubelet[2769]: I0430 00:33:20.743634 2769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0b88508a-f8bf-4158-a4a8-cd11af4aef0f-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "0b88508a-f8bf-4158-a4a8-cd11af4aef0f" (UID: "0b88508a-f8bf-4158-a4a8-cd11af4aef0f"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 30 00:33:20.744062 kubelet[2769]: I0430 00:33:20.743646 2769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0b88508a-f8bf-4158-a4a8-cd11af4aef0f-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "0b88508a-f8bf-4158-a4a8-cd11af4aef0f" (UID: "0b88508a-f8bf-4158-a4a8-cd11af4aef0f"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 30 00:33:20.744487 kubelet[2769]: I0430 00:33:20.744279 2769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0b88508a-f8bf-4158-a4a8-cd11af4aef0f-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "0b88508a-f8bf-4158-a4a8-cd11af4aef0f" (UID: "0b88508a-f8bf-4158-a4a8-cd11af4aef0f"). InnerVolumeSpecName "lib-modules".
PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 30 00:33:20.744487 kubelet[2769]: I0430 00:33:20.744303 2769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0b88508a-f8bf-4158-a4a8-cd11af4aef0f-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "0b88508a-f8bf-4158-a4a8-cd11af4aef0f" (UID: "0b88508a-f8bf-4158-a4a8-cd11af4aef0f"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 30 00:33:20.747762 kubelet[2769]: I0430 00:33:20.747115 2769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0b88508a-f8bf-4158-a4a8-cd11af4aef0f-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "0b88508a-f8bf-4158-a4a8-cd11af4aef0f" (UID: "0b88508a-f8bf-4158-a4a8-cd11af4aef0f"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 30 00:33:20.747762 kubelet[2769]: I0430 00:33:20.747201 2769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b88508a-f8bf-4158-a4a8-cd11af4aef0f-kube-api-access-4gxtm" (OuterVolumeSpecName: "kube-api-access-4gxtm") pod "0b88508a-f8bf-4158-a4a8-cd11af4aef0f" (UID: "0b88508a-f8bf-4158-a4a8-cd11af4aef0f"). InnerVolumeSpecName "kube-api-access-4gxtm". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 30 00:33:20.747903 kubelet[2769]: I0430 00:33:20.747874 2769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b88508a-f8bf-4158-a4a8-cd11af4aef0f-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "0b88508a-f8bf-4158-a4a8-cd11af4aef0f" (UID: "0b88508a-f8bf-4158-a4a8-cd11af4aef0f"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 30 00:33:20.747961 kubelet[2769]: I0430 00:33:20.747938 2769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b88508a-f8bf-4158-a4a8-cd11af4aef0f-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "0b88508a-f8bf-4158-a4a8-cd11af4aef0f" (UID: "0b88508a-f8bf-4158-a4a8-cd11af4aef0f"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 30 00:33:20.748059 kubelet[2769]: I0430 00:33:20.748025 2769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2036a4e1-a065-4a97-b00d-90f227ad1c4b-kube-api-access-gfqxl" (OuterVolumeSpecName: "kube-api-access-gfqxl") pod "2036a4e1-a065-4a97-b00d-90f227ad1c4b" (UID: "2036a4e1-a065-4a97-b00d-90f227ad1c4b"). InnerVolumeSpecName "kube-api-access-gfqxl". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 30 00:33:20.749102 kubelet[2769]: I0430 00:33:20.749074 2769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2036a4e1-a065-4a97-b00d-90f227ad1c4b-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "2036a4e1-a065-4a97-b00d-90f227ad1c4b" (UID: "2036a4e1-a065-4a97-b00d-90f227ad1c4b"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 30 00:33:20.841624 kubelet[2769]: I0430 00:33:20.841581 2769 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2036a4e1-a065-4a97-b00d-90f227ad1c4b-cilium-config-path\") on node \"ci-4152-2-3-b-856bdfce49\" DevicePath \"\"" Apr 30 00:33:20.841624 kubelet[2769]: I0430 00:33:20.841619 2769 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0b88508a-f8bf-4158-a4a8-cd11af4aef0f-etc-cni-netd\") on node \"ci-4152-2-3-b-856bdfce49\" DevicePath \"\"" Apr 30 00:33:20.841624 kubelet[2769]: I0430 00:33:20.841634 2769 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0b88508a-f8bf-4158-a4a8-cd11af4aef0f-clustermesh-secrets\") on node \"ci-4152-2-3-b-856bdfce49\" DevicePath \"\"" Apr 30 00:33:20.841825 kubelet[2769]: I0430 00:33:20.841645 2769 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0b88508a-f8bf-4158-a4a8-cd11af4aef0f-hostproc\") on node \"ci-4152-2-3-b-856bdfce49\" DevicePath \"\"" Apr 30 00:33:20.841825 kubelet[2769]: I0430 00:33:20.841656 2769 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0b88508a-f8bf-4158-a4a8-cd11af4aef0f-cilium-cgroup\") on node \"ci-4152-2-3-b-856bdfce49\" DevicePath \"\"" Apr 30 00:33:20.841825 kubelet[2769]: I0430 00:33:20.841666 2769 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0b88508a-f8bf-4158-a4a8-cd11af4aef0f-xtables-lock\") on node \"ci-4152-2-3-b-856bdfce49\" DevicePath \"\"" Apr 30 00:33:20.841825 kubelet[2769]: I0430 00:33:20.841673 2769 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0b88508a-f8bf-4158-a4a8-cd11af4aef0f-cilium-run\") on node 
\"ci-4152-2-3-b-856bdfce49\" DevicePath \"\"" Apr 30 00:33:20.841825 kubelet[2769]: I0430 00:33:20.841680 2769 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0b88508a-f8bf-4158-a4a8-cd11af4aef0f-bpf-maps\") on node \"ci-4152-2-3-b-856bdfce49\" DevicePath \"\"" Apr 30 00:33:20.841825 kubelet[2769]: I0430 00:33:20.841688 2769 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0b88508a-f8bf-4158-a4a8-cd11af4aef0f-host-proc-sys-kernel\") on node \"ci-4152-2-3-b-856bdfce49\" DevicePath \"\"" Apr 30 00:33:20.841825 kubelet[2769]: I0430 00:33:20.841696 2769 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0b88508a-f8bf-4158-a4a8-cd11af4aef0f-hubble-tls\") on node \"ci-4152-2-3-b-856bdfce49\" DevicePath \"\"" Apr 30 00:33:20.841825 kubelet[2769]: I0430 00:33:20.841704 2769 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0b88508a-f8bf-4158-a4a8-cd11af4aef0f-host-proc-sys-net\") on node \"ci-4152-2-3-b-856bdfce49\" DevicePath \"\"" Apr 30 00:33:20.841981 kubelet[2769]: I0430 00:33:20.841711 2769 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0b88508a-f8bf-4158-a4a8-cd11af4aef0f-cni-path\") on node \"ci-4152-2-3-b-856bdfce49\" DevicePath \"\"" Apr 30 00:33:20.841981 kubelet[2769]: I0430 00:33:20.841718 2769 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-4gxtm\" (UniqueName: \"kubernetes.io/projected/0b88508a-f8bf-4158-a4a8-cd11af4aef0f-kube-api-access-4gxtm\") on node \"ci-4152-2-3-b-856bdfce49\" DevicePath \"\"" Apr 30 00:33:20.841981 kubelet[2769]: I0430 00:33:20.841725 2769 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/0b88508a-f8bf-4158-a4a8-cd11af4aef0f-cilium-config-path\") on node \"ci-4152-2-3-b-856bdfce49\" DevicePath \"\"" Apr 30 00:33:20.841981 kubelet[2769]: I0430 00:33:20.841735 2769 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-gfqxl\" (UniqueName: \"kubernetes.io/projected/2036a4e1-a065-4a97-b00d-90f227ad1c4b-kube-api-access-gfqxl\") on node \"ci-4152-2-3-b-856bdfce49\" DevicePath \"\"" Apr 30 00:33:20.841981 kubelet[2769]: I0430 00:33:20.841743 2769 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0b88508a-f8bf-4158-a4a8-cd11af4aef0f-lib-modules\") on node \"ci-4152-2-3-b-856bdfce49\" DevicePath \"\"" Apr 30 00:33:21.015422 systemd[1]: Removed slice kubepods-besteffort-pod2036a4e1_a065_4a97_b00d_90f227ad1c4b.slice - libcontainer container kubepods-besteffort-pod2036a4e1_a065_4a97_b00d_90f227ad1c4b.slice. Apr 30 00:33:21.019417 kubelet[2769]: I0430 00:33:21.019379 2769 scope.go:117] "RemoveContainer" containerID="c7b5ccc4c58b46eb01afa8433af9a7d2f6143c47c2078d3d473686467fbc9266" Apr 30 00:33:21.028069 containerd[1498]: time="2025-04-30T00:33:21.027768955Z" level=info msg="RemoveContainer for \"c7b5ccc4c58b46eb01afa8433af9a7d2f6143c47c2078d3d473686467fbc9266\"" Apr 30 00:33:21.034184 containerd[1498]: time="2025-04-30T00:33:21.033923455Z" level=info msg="RemoveContainer for \"c7b5ccc4c58b46eb01afa8433af9a7d2f6143c47c2078d3d473686467fbc9266\" returns successfully" Apr 30 00:33:21.039260 kubelet[2769]: I0430 00:33:21.038834 2769 scope.go:117] "RemoveContainer" containerID="c7b5ccc4c58b46eb01afa8433af9a7d2f6143c47c2078d3d473686467fbc9266" Apr 30 00:33:21.040241 containerd[1498]: time="2025-04-30T00:33:21.039812555Z" level=error msg="ContainerStatus for \"c7b5ccc4c58b46eb01afa8433af9a7d2f6143c47c2078d3d473686467fbc9266\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"c7b5ccc4c58b46eb01afa8433af9a7d2f6143c47c2078d3d473686467fbc9266\": not found" Apr 30 00:33:21.041557 systemd[1]: Removed slice kubepods-burstable-pod0b88508a_f8bf_4158_a4a8_cd11af4aef0f.slice - libcontainer container kubepods-burstable-pod0b88508a_f8bf_4158_a4a8_cd11af4aef0f.slice. Apr 30 00:33:21.041862 systemd[1]: kubepods-burstable-pod0b88508a_f8bf_4158_a4a8_cd11af4aef0f.slice: Consumed 6.544s CPU time. Apr 30 00:33:21.044063 kubelet[2769]: E0430 00:33:21.043996 2769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c7b5ccc4c58b46eb01afa8433af9a7d2f6143c47c2078d3d473686467fbc9266\": not found" containerID="c7b5ccc4c58b46eb01afa8433af9a7d2f6143c47c2078d3d473686467fbc9266" Apr 30 00:33:21.044502 kubelet[2769]: I0430 00:33:21.044155 2769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c7b5ccc4c58b46eb01afa8433af9a7d2f6143c47c2078d3d473686467fbc9266"} err="failed to get container status \"c7b5ccc4c58b46eb01afa8433af9a7d2f6143c47c2078d3d473686467fbc9266\": rpc error: code = NotFound desc = an error occurred when try to find container \"c7b5ccc4c58b46eb01afa8433af9a7d2f6143c47c2078d3d473686467fbc9266\": not found" Apr 30 00:33:21.044502 kubelet[2769]: I0430 00:33:21.044279 2769 scope.go:117] "RemoveContainer" containerID="61766eb1cd9bac999e06b48f70076c9b0e6dc4c707e3a015450a69de53451e9b" Apr 30 00:33:21.046062 containerd[1498]: time="2025-04-30T00:33:21.046039574Z" level=info msg="RemoveContainer for \"61766eb1cd9bac999e06b48f70076c9b0e6dc4c707e3a015450a69de53451e9b\"" Apr 30 00:33:21.050136 containerd[1498]: time="2025-04-30T00:33:21.050090156Z" level=info msg="RemoveContainer for \"61766eb1cd9bac999e06b48f70076c9b0e6dc4c707e3a015450a69de53451e9b\" returns successfully" Apr 30 00:33:21.050458 kubelet[2769]: I0430 00:33:21.050340 2769 scope.go:117] "RemoveContainer" 
containerID="7f8c12e7db06c65d2d814a4ff7fa4b446eadb1ad9c68e200c1513707b0340915" Apr 30 00:33:21.051277 containerd[1498]: time="2025-04-30T00:33:21.051243975Z" level=info msg="RemoveContainer for \"7f8c12e7db06c65d2d814a4ff7fa4b446eadb1ad9c68e200c1513707b0340915\"" Apr 30 00:33:21.055383 containerd[1498]: time="2025-04-30T00:33:21.054693783Z" level=info msg="RemoveContainer for \"7f8c12e7db06c65d2d814a4ff7fa4b446eadb1ad9c68e200c1513707b0340915\" returns successfully" Apr 30 00:33:21.055435 kubelet[2769]: I0430 00:33:21.054898 2769 scope.go:117] "RemoveContainer" containerID="d7249f3ec7df1a2c98b8c6aec85204d533880db3e513880ffa832c4c4a665485" Apr 30 00:33:21.057097 containerd[1498]: time="2025-04-30T00:33:21.057061677Z" level=info msg="RemoveContainer for \"d7249f3ec7df1a2c98b8c6aec85204d533880db3e513880ffa832c4c4a665485\"" Apr 30 00:33:21.059756 containerd[1498]: time="2025-04-30T00:33:21.059711014Z" level=info msg="RemoveContainer for \"d7249f3ec7df1a2c98b8c6aec85204d533880db3e513880ffa832c4c4a665485\" returns successfully" Apr 30 00:33:21.059904 kubelet[2769]: I0430 00:33:21.059887 2769 scope.go:117] "RemoveContainer" containerID="2e1fd780197f9eb47b9b1aef932c3275cd3f2916c531c1bae6b0a08322242def" Apr 30 00:33:21.060923 containerd[1498]: time="2025-04-30T00:33:21.060907497Z" level=info msg="RemoveContainer for \"2e1fd780197f9eb47b9b1aef932c3275cd3f2916c531c1bae6b0a08322242def\"" Apr 30 00:33:21.063453 containerd[1498]: time="2025-04-30T00:33:21.063400712Z" level=info msg="RemoveContainer for \"2e1fd780197f9eb47b9b1aef932c3275cd3f2916c531c1bae6b0a08322242def\" returns successfully" Apr 30 00:33:21.063529 kubelet[2769]: I0430 00:33:21.063514 2769 scope.go:117] "RemoveContainer" containerID="4716543a28e92ed941e3e41c94a0e03cacf02d46c6f8e0ccc898208e770f22fe" Apr 30 00:33:21.064453 containerd[1498]: time="2025-04-30T00:33:21.064412379Z" level=info msg="RemoveContainer for \"4716543a28e92ed941e3e41c94a0e03cacf02d46c6f8e0ccc898208e770f22fe\"" Apr 30 00:33:21.066560 
containerd[1498]: time="2025-04-30T00:33:21.066538428Z" level=info msg="RemoveContainer for \"4716543a28e92ed941e3e41c94a0e03cacf02d46c6f8e0ccc898208e770f22fe\" returns successfully" Apr 30 00:33:21.066653 kubelet[2769]: I0430 00:33:21.066633 2769 scope.go:117] "RemoveContainer" containerID="61766eb1cd9bac999e06b48f70076c9b0e6dc4c707e3a015450a69de53451e9b" Apr 30 00:33:21.066842 containerd[1498]: time="2025-04-30T00:33:21.066811034Z" level=error msg="ContainerStatus for \"61766eb1cd9bac999e06b48f70076c9b0e6dc4c707e3a015450a69de53451e9b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"61766eb1cd9bac999e06b48f70076c9b0e6dc4c707e3a015450a69de53451e9b\": not found" Apr 30 00:33:21.066917 kubelet[2769]: E0430 00:33:21.066903 2769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"61766eb1cd9bac999e06b48f70076c9b0e6dc4c707e3a015450a69de53451e9b\": not found" containerID="61766eb1cd9bac999e06b48f70076c9b0e6dc4c707e3a015450a69de53451e9b" Apr 30 00:33:21.066945 kubelet[2769]: I0430 00:33:21.066921 2769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"61766eb1cd9bac999e06b48f70076c9b0e6dc4c707e3a015450a69de53451e9b"} err="failed to get container status \"61766eb1cd9bac999e06b48f70076c9b0e6dc4c707e3a015450a69de53451e9b\": rpc error: code = NotFound desc = an error occurred when try to find container \"61766eb1cd9bac999e06b48f70076c9b0e6dc4c707e3a015450a69de53451e9b\": not found" Apr 30 00:33:21.066945 kubelet[2769]: I0430 00:33:21.066936 2769 scope.go:117] "RemoveContainer" containerID="7f8c12e7db06c65d2d814a4ff7fa4b446eadb1ad9c68e200c1513707b0340915" Apr 30 00:33:21.067098 containerd[1498]: time="2025-04-30T00:33:21.067071985Z" level=error msg="ContainerStatus for \"7f8c12e7db06c65d2d814a4ff7fa4b446eadb1ad9c68e200c1513707b0340915\" failed" error="rpc error: code = NotFound desc = an 
error occurred when try to find container \"7f8c12e7db06c65d2d814a4ff7fa4b446eadb1ad9c68e200c1513707b0340915\": not found" Apr 30 00:33:21.067173 kubelet[2769]: E0430 00:33:21.067160 2769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7f8c12e7db06c65d2d814a4ff7fa4b446eadb1ad9c68e200c1513707b0340915\": not found" containerID="7f8c12e7db06c65d2d814a4ff7fa4b446eadb1ad9c68e200c1513707b0340915" Apr 30 00:33:21.067201 kubelet[2769]: I0430 00:33:21.067175 2769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7f8c12e7db06c65d2d814a4ff7fa4b446eadb1ad9c68e200c1513707b0340915"} err="failed to get container status \"7f8c12e7db06c65d2d814a4ff7fa4b446eadb1ad9c68e200c1513707b0340915\": rpc error: code = NotFound desc = an error occurred when try to find container \"7f8c12e7db06c65d2d814a4ff7fa4b446eadb1ad9c68e200c1513707b0340915\": not found" Apr 30 00:33:21.067201 kubelet[2769]: I0430 00:33:21.067186 2769 scope.go:117] "RemoveContainer" containerID="d7249f3ec7df1a2c98b8c6aec85204d533880db3e513880ffa832c4c4a665485" Apr 30 00:33:21.067303 containerd[1498]: time="2025-04-30T00:33:21.067278442Z" level=error msg="ContainerStatus for \"d7249f3ec7df1a2c98b8c6aec85204d533880db3e513880ffa832c4c4a665485\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d7249f3ec7df1a2c98b8c6aec85204d533880db3e513880ffa832c4c4a665485\": not found" Apr 30 00:33:21.067410 kubelet[2769]: E0430 00:33:21.067387 2769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d7249f3ec7df1a2c98b8c6aec85204d533880db3e513880ffa832c4c4a665485\": not found" containerID="d7249f3ec7df1a2c98b8c6aec85204d533880db3e513880ffa832c4c4a665485" Apr 30 00:33:21.067465 kubelet[2769]: I0430 00:33:21.067408 2769 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"containerd","ID":"d7249f3ec7df1a2c98b8c6aec85204d533880db3e513880ffa832c4c4a665485"} err="failed to get container status \"d7249f3ec7df1a2c98b8c6aec85204d533880db3e513880ffa832c4c4a665485\": rpc error: code = NotFound desc = an error occurred when try to find container \"d7249f3ec7df1a2c98b8c6aec85204d533880db3e513880ffa832c4c4a665485\": not found" Apr 30 00:33:21.067493 kubelet[2769]: I0430 00:33:21.067464 2769 scope.go:117] "RemoveContainer" containerID="2e1fd780197f9eb47b9b1aef932c3275cd3f2916c531c1bae6b0a08322242def" Apr 30 00:33:21.067594 containerd[1498]: time="2025-04-30T00:33:21.067568882Z" level=error msg="ContainerStatus for \"2e1fd780197f9eb47b9b1aef932c3275cd3f2916c531c1bae6b0a08322242def\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2e1fd780197f9eb47b9b1aef932c3275cd3f2916c531c1bae6b0a08322242def\": not found" Apr 30 00:33:21.067689 kubelet[2769]: E0430 00:33:21.067661 2769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2e1fd780197f9eb47b9b1aef932c3275cd3f2916c531c1bae6b0a08322242def\": not found" containerID="2e1fd780197f9eb47b9b1aef932c3275cd3f2916c531c1bae6b0a08322242def" Apr 30 00:33:21.067689 kubelet[2769]: I0430 00:33:21.067679 2769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2e1fd780197f9eb47b9b1aef932c3275cd3f2916c531c1bae6b0a08322242def"} err="failed to get container status \"2e1fd780197f9eb47b9b1aef932c3275cd3f2916c531c1bae6b0a08322242def\": rpc error: code = NotFound desc = an error occurred when try to find container \"2e1fd780197f9eb47b9b1aef932c3275cd3f2916c531c1bae6b0a08322242def\": not found" Apr 30 00:33:21.067689 kubelet[2769]: I0430 00:33:21.067690 2769 scope.go:117] "RemoveContainer" containerID="4716543a28e92ed941e3e41c94a0e03cacf02d46c6f8e0ccc898208e770f22fe" Apr 30 
00:33:21.068031 containerd[1498]: time="2025-04-30T00:33:21.067854881Z" level=error msg="ContainerStatus for \"4716543a28e92ed941e3e41c94a0e03cacf02d46c6f8e0ccc898208e770f22fe\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4716543a28e92ed941e3e41c94a0e03cacf02d46c6f8e0ccc898208e770f22fe\": not found" Apr 30 00:33:21.068083 kubelet[2769]: E0430 00:33:21.067950 2769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4716543a28e92ed941e3e41c94a0e03cacf02d46c6f8e0ccc898208e770f22fe\": not found" containerID="4716543a28e92ed941e3e41c94a0e03cacf02d46c6f8e0ccc898208e770f22fe" Apr 30 00:33:21.068083 kubelet[2769]: I0430 00:33:21.067975 2769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4716543a28e92ed941e3e41c94a0e03cacf02d46c6f8e0ccc898208e770f22fe"} err="failed to get container status \"4716543a28e92ed941e3e41c94a0e03cacf02d46c6f8e0ccc898208e770f22fe\": rpc error: code = NotFound desc = an error occurred when try to find container \"4716543a28e92ed941e3e41c94a0e03cacf02d46c6f8e0ccc898208e770f22fe\": not found" Apr 30 00:33:21.462720 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-93f047671f000fa91503ca5b61f083f48865c68e4a36228aeb16f35d1eed6c0e-rootfs.mount: Deactivated successfully. Apr 30 00:33:21.462980 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5ef57b48e6d15a5060d36d2e6b2ca8d374cf1f99fa07bfac4203473b1e44b1a1-rootfs.mount: Deactivated successfully. Apr 30 00:33:21.463073 systemd[1]: var-lib-kubelet-pods-2036a4e1\x2da065\x2d4a97\x2db00d\x2d90f227ad1c4b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dgfqxl.mount: Deactivated successfully. 
Apr 30 00:33:21.463135 systemd[1]: var-lib-kubelet-pods-0b88508a\x2df8bf\x2d4158\x2da4a8\x2dcd11af4aef0f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d4gxtm.mount: Deactivated successfully. Apr 30 00:33:21.463198 systemd[1]: var-lib-kubelet-pods-0b88508a\x2df8bf\x2d4158\x2da4a8\x2dcd11af4aef0f-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Apr 30 00:33:21.463256 systemd[1]: var-lib-kubelet-pods-0b88508a\x2df8bf\x2d4158\x2da4a8\x2dcd11af4aef0f-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Apr 30 00:33:22.316834 kubelet[2769]: I0430 00:33:22.316767 2769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b88508a-f8bf-4158-a4a8-cd11af4aef0f" path="/var/lib/kubelet/pods/0b88508a-f8bf-4158-a4a8-cd11af4aef0f/volumes" Apr 30 00:33:22.317378 kubelet[2769]: I0430 00:33:22.317347 2769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2036a4e1-a065-4a97-b00d-90f227ad1c4b" path="/var/lib/kubelet/pods/2036a4e1-a065-4a97-b00d-90f227ad1c4b/volumes" Apr 30 00:33:22.515703 sshd[4346]: Connection closed by 139.178.89.65 port 49090 Apr 30 00:33:22.516301 sshd-session[4344]: pam_unix(sshd:session): session closed for user core Apr 30 00:33:22.519855 systemd-logind[1474]: Session 20 logged out. Waiting for processes to exit. Apr 30 00:33:22.520243 systemd[1]: sshd@20-37.27.9.63:22-139.178.89.65:49090.service: Deactivated successfully. Apr 30 00:33:22.522002 systemd[1]: session-20.scope: Deactivated successfully. Apr 30 00:33:22.523251 systemd-logind[1474]: Removed session 20. Apr 30 00:33:22.686494 systemd[1]: Started sshd@21-37.27.9.63:22-139.178.89.65:49096.service - OpenSSH per-connection server daemon (139.178.89.65:49096). 
Apr 30 00:33:23.663399 sshd[4508]: Accepted publickey for core from 139.178.89.65 port 49096 ssh2: RSA SHA256:Z5W/GaQaT6a1P913sOSNU1tCxAIyxSb0Z7O2Sul6T8E Apr 30 00:33:23.664727 sshd-session[4508]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:33:23.668994 systemd-logind[1474]: New session 21 of user core. Apr 30 00:33:23.672400 systemd[1]: Started session-21.scope - Session 21 of User core. Apr 30 00:33:24.477480 kubelet[2769]: E0430 00:33:24.477438 2769 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0b88508a-f8bf-4158-a4a8-cd11af4aef0f" containerName="clean-cilium-state" Apr 30 00:33:24.477480 kubelet[2769]: E0430 00:33:24.477474 2769 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0b88508a-f8bf-4158-a4a8-cd11af4aef0f" containerName="cilium-agent" Apr 30 00:33:24.477480 kubelet[2769]: E0430 00:33:24.477483 2769 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0b88508a-f8bf-4158-a4a8-cd11af4aef0f" containerName="apply-sysctl-overwrites" Apr 30 00:33:24.478043 kubelet[2769]: E0430 00:33:24.477498 2769 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0b88508a-f8bf-4158-a4a8-cd11af4aef0f" containerName="mount-cgroup" Apr 30 00:33:24.478043 kubelet[2769]: E0430 00:33:24.477506 2769 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0b88508a-f8bf-4158-a4a8-cd11af4aef0f" containerName="mount-bpf-fs" Apr 30 00:33:24.478043 kubelet[2769]: E0430 00:33:24.477514 2769 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2036a4e1-a065-4a97-b00d-90f227ad1c4b" containerName="cilium-operator" Apr 30 00:33:24.485413 kubelet[2769]: I0430 00:33:24.485343 2769 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b88508a-f8bf-4158-a4a8-cd11af4aef0f" containerName="cilium-agent" Apr 30 00:33:24.485501 kubelet[2769]: I0430 00:33:24.485428 2769 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="2036a4e1-a065-4a97-b00d-90f227ad1c4b" containerName="cilium-operator" Apr 30 00:33:24.523939 systemd[1]: Created slice kubepods-burstable-pod0b44a6e8_a2c2_4729_922a_425b7bbb1c24.slice - libcontainer container kubepods-burstable-pod0b44a6e8_a2c2_4729_922a_425b7bbb1c24.slice. Apr 30 00:33:24.565157 kubelet[2769]: I0430 00:33:24.565118 2769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0b44a6e8-a2c2-4729-922a-425b7bbb1c24-etc-cni-netd\") pod \"cilium-xkrr5\" (UID: \"0b44a6e8-a2c2-4729-922a-425b7bbb1c24\") " pod="kube-system/cilium-xkrr5" Apr 30 00:33:24.565157 kubelet[2769]: I0430 00:33:24.565154 2769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0b44a6e8-a2c2-4729-922a-425b7bbb1c24-clustermesh-secrets\") pod \"cilium-xkrr5\" (UID: \"0b44a6e8-a2c2-4729-922a-425b7bbb1c24\") " pod="kube-system/cilium-xkrr5" Apr 30 00:33:24.565157 kubelet[2769]: I0430 00:33:24.565171 2769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0b44a6e8-a2c2-4729-922a-425b7bbb1c24-host-proc-sys-net\") pod \"cilium-xkrr5\" (UID: \"0b44a6e8-a2c2-4729-922a-425b7bbb1c24\") " pod="kube-system/cilium-xkrr5" Apr 30 00:33:24.565336 kubelet[2769]: I0430 00:33:24.565187 2769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mnhxs\" (UniqueName: \"kubernetes.io/projected/0b44a6e8-a2c2-4729-922a-425b7bbb1c24-kube-api-access-mnhxs\") pod \"cilium-xkrr5\" (UID: \"0b44a6e8-a2c2-4729-922a-425b7bbb1c24\") " pod="kube-system/cilium-xkrr5" Apr 30 00:33:24.565336 kubelet[2769]: I0430 00:33:24.565201 2769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: 
\"kubernetes.io/host-path/0b44a6e8-a2c2-4729-922a-425b7bbb1c24-bpf-maps\") pod \"cilium-xkrr5\" (UID: \"0b44a6e8-a2c2-4729-922a-425b7bbb1c24\") " pod="kube-system/cilium-xkrr5"
Apr 30 00:33:24.565336 kubelet[2769]: I0430 00:33:24.565213 2769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0b44a6e8-a2c2-4729-922a-425b7bbb1c24-lib-modules\") pod \"cilium-xkrr5\" (UID: \"0b44a6e8-a2c2-4729-922a-425b7bbb1c24\") " pod="kube-system/cilium-xkrr5"
Apr 30 00:33:24.565336 kubelet[2769]: I0430 00:33:24.565237 2769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0b44a6e8-a2c2-4729-922a-425b7bbb1c24-host-proc-sys-kernel\") pod \"cilium-xkrr5\" (UID: \"0b44a6e8-a2c2-4729-922a-425b7bbb1c24\") " pod="kube-system/cilium-xkrr5"
Apr 30 00:33:24.565336 kubelet[2769]: I0430 00:33:24.565263 2769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0b44a6e8-a2c2-4729-922a-425b7bbb1c24-hostproc\") pod \"cilium-xkrr5\" (UID: \"0b44a6e8-a2c2-4729-922a-425b7bbb1c24\") " pod="kube-system/cilium-xkrr5"
Apr 30 00:33:24.565336 kubelet[2769]: I0430 00:33:24.565276 2769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0b44a6e8-a2c2-4729-922a-425b7bbb1c24-cilium-cgroup\") pod \"cilium-xkrr5\" (UID: \"0b44a6e8-a2c2-4729-922a-425b7bbb1c24\") " pod="kube-system/cilium-xkrr5"
Apr 30 00:33:24.565445 kubelet[2769]: I0430 00:33:24.565287 2769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/0b44a6e8-a2c2-4729-922a-425b7bbb1c24-cilium-ipsec-secrets\") pod \"cilium-xkrr5\" (UID: \"0b44a6e8-a2c2-4729-922a-425b7bbb1c24\") " pod="kube-system/cilium-xkrr5"
Apr 30 00:33:24.565445 kubelet[2769]: I0430 00:33:24.565299 2769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0b44a6e8-a2c2-4729-922a-425b7bbb1c24-cilium-run\") pod \"cilium-xkrr5\" (UID: \"0b44a6e8-a2c2-4729-922a-425b7bbb1c24\") " pod="kube-system/cilium-xkrr5"
Apr 30 00:33:24.565445 kubelet[2769]: I0430 00:33:24.565312 2769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0b44a6e8-a2c2-4729-922a-425b7bbb1c24-cilium-config-path\") pod \"cilium-xkrr5\" (UID: \"0b44a6e8-a2c2-4729-922a-425b7bbb1c24\") " pod="kube-system/cilium-xkrr5"
Apr 30 00:33:24.565445 kubelet[2769]: I0430 00:33:24.565326 2769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0b44a6e8-a2c2-4729-922a-425b7bbb1c24-cni-path\") pod \"cilium-xkrr5\" (UID: \"0b44a6e8-a2c2-4729-922a-425b7bbb1c24\") " pod="kube-system/cilium-xkrr5"
Apr 30 00:33:24.565445 kubelet[2769]: I0430 00:33:24.565336 2769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0b44a6e8-a2c2-4729-922a-425b7bbb1c24-xtables-lock\") pod \"cilium-xkrr5\" (UID: \"0b44a6e8-a2c2-4729-922a-425b7bbb1c24\") " pod="kube-system/cilium-xkrr5"
Apr 30 00:33:24.565445 kubelet[2769]: I0430 00:33:24.565357 2769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0b44a6e8-a2c2-4729-922a-425b7bbb1c24-hubble-tls\") pod \"cilium-xkrr5\" (UID: \"0b44a6e8-a2c2-4729-922a-425b7bbb1c24\") " pod="kube-system/cilium-xkrr5"
Apr 30 00:33:24.611406 sshd[4510]: Connection closed by 139.178.89.65 port 49096
Apr 30 00:33:24.612031 sshd-session[4508]: pam_unix(sshd:session): session closed for user core
Apr 30 00:33:24.615499 systemd-logind[1474]: Session 21 logged out. Waiting for processes to exit.
Apr 30 00:33:24.616106 systemd[1]: sshd@21-37.27.9.63:22-139.178.89.65:49096.service: Deactivated successfully.
Apr 30 00:33:24.617834 systemd[1]: session-21.scope: Deactivated successfully.
Apr 30 00:33:24.618680 systemd-logind[1474]: Removed session 21.
Apr 30 00:33:24.776866 systemd[1]: Started sshd@22-37.27.9.63:22-139.178.89.65:49100.service - OpenSSH per-connection server daemon (139.178.89.65:49100).
Apr 30 00:33:24.830465 containerd[1498]: time="2025-04-30T00:33:24.830412920Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xkrr5,Uid:0b44a6e8-a2c2-4729-922a-425b7bbb1c24,Namespace:kube-system,Attempt:0,}"
Apr 30 00:33:24.850959 containerd[1498]: time="2025-04-30T00:33:24.850758144Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 30 00:33:24.850959 containerd[1498]: time="2025-04-30T00:33:24.850805876Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 30 00:33:24.850959 containerd[1498]: time="2025-04-30T00:33:24.850818370Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 00:33:24.850959 containerd[1498]: time="2025-04-30T00:33:24.850886812Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 00:33:24.872191 systemd[1]: Started cri-containerd-16c9f60456480e9b473e3c5019551bb6bd97460e50d5f980031d4a7b45117ece.scope - libcontainer container 16c9f60456480e9b473e3c5019551bb6bd97460e50d5f980031d4a7b45117ece.
Apr 30 00:33:24.898359 containerd[1498]: time="2025-04-30T00:33:24.898324010Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xkrr5,Uid:0b44a6e8-a2c2-4729-922a-425b7bbb1c24,Namespace:kube-system,Attempt:0,} returns sandbox id \"16c9f60456480e9b473e3c5019551bb6bd97460e50d5f980031d4a7b45117ece\""
Apr 30 00:33:24.903864 containerd[1498]: time="2025-04-30T00:33:24.903827620Z" level=info msg="CreateContainer within sandbox \"16c9f60456480e9b473e3c5019551bb6bd97460e50d5f980031d4a7b45117ece\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Apr 30 00:33:24.914417 containerd[1498]: time="2025-04-30T00:33:24.914376185Z" level=info msg="CreateContainer within sandbox \"16c9f60456480e9b473e3c5019551bb6bd97460e50d5f980031d4a7b45117ece\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"91e7499286d1b21e4be95b79d4dc53f5186133a8893bfb2a849128fb3800e394\""
Apr 30 00:33:24.914948 containerd[1498]: time="2025-04-30T00:33:24.914910554Z" level=info msg="StartContainer for \"91e7499286d1b21e4be95b79d4dc53f5186133a8893bfb2a849128fb3800e394\""
Apr 30 00:33:24.935146 systemd[1]: Started cri-containerd-91e7499286d1b21e4be95b79d4dc53f5186133a8893bfb2a849128fb3800e394.scope - libcontainer container 91e7499286d1b21e4be95b79d4dc53f5186133a8893bfb2a849128fb3800e394.
Apr 30 00:33:24.954761 containerd[1498]: time="2025-04-30T00:33:24.954719255Z" level=info msg="StartContainer for \"91e7499286d1b21e4be95b79d4dc53f5186133a8893bfb2a849128fb3800e394\" returns successfully"
Apr 30 00:33:24.964542 systemd[1]: cri-containerd-91e7499286d1b21e4be95b79d4dc53f5186133a8893bfb2a849128fb3800e394.scope: Deactivated successfully.
Apr 30 00:33:24.992516 containerd[1498]: time="2025-04-30T00:33:24.992449979Z" level=info msg="shim disconnected" id=91e7499286d1b21e4be95b79d4dc53f5186133a8893bfb2a849128fb3800e394 namespace=k8s.io
Apr 30 00:33:24.992516 containerd[1498]: time="2025-04-30T00:33:24.992497951Z" level=warning msg="cleaning up after shim disconnected" id=91e7499286d1b21e4be95b79d4dc53f5186133a8893bfb2a849128fb3800e394 namespace=k8s.io
Apr 30 00:33:24.992516 containerd[1498]: time="2025-04-30T00:33:24.992505284Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 30 00:33:25.039101 containerd[1498]: time="2025-04-30T00:33:25.037790215Z" level=info msg="CreateContainer within sandbox \"16c9f60456480e9b473e3c5019551bb6bd97460e50d5f980031d4a7b45117ece\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Apr 30 00:33:25.061151 containerd[1498]: time="2025-04-30T00:33:25.061105105Z" level=info msg="CreateContainer within sandbox \"16c9f60456480e9b473e3c5019551bb6bd97460e50d5f980031d4a7b45117ece\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"0f02f829fae0a81466bae53d0ad818ec1d3b4bad66487c3da86739db7168ed4b\""
Apr 30 00:33:25.061989 containerd[1498]: time="2025-04-30T00:33:25.061941795Z" level=info msg="StartContainer for \"0f02f829fae0a81466bae53d0ad818ec1d3b4bad66487c3da86739db7168ed4b\""
Apr 30 00:33:25.082136 systemd[1]: Started cri-containerd-0f02f829fae0a81466bae53d0ad818ec1d3b4bad66487c3da86739db7168ed4b.scope - libcontainer container 0f02f829fae0a81466bae53d0ad818ec1d3b4bad66487c3da86739db7168ed4b.
Apr 30 00:33:25.100000 containerd[1498]: time="2025-04-30T00:33:25.099952479Z" level=info msg="StartContainer for \"0f02f829fae0a81466bae53d0ad818ec1d3b4bad66487c3da86739db7168ed4b\" returns successfully"
Apr 30 00:33:25.106828 systemd[1]: cri-containerd-0f02f829fae0a81466bae53d0ad818ec1d3b4bad66487c3da86739db7168ed4b.scope: Deactivated successfully.
Apr 30 00:33:25.125623 containerd[1498]: time="2025-04-30T00:33:25.125249872Z" level=info msg="shim disconnected" id=0f02f829fae0a81466bae53d0ad818ec1d3b4bad66487c3da86739db7168ed4b namespace=k8s.io
Apr 30 00:33:25.125623 containerd[1498]: time="2025-04-30T00:33:25.125489123Z" level=warning msg="cleaning up after shim disconnected" id=0f02f829fae0a81466bae53d0ad818ec1d3b4bad66487c3da86739db7168ed4b namespace=k8s.io
Apr 30 00:33:25.125623 containerd[1498]: time="2025-04-30T00:33:25.125498481Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 30 00:33:25.452549 kubelet[2769]: E0430 00:33:25.452509 2769 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 30 00:33:25.674210 systemd[1]: run-containerd-runc-k8s.io-16c9f60456480e9b473e3c5019551bb6bd97460e50d5f980031d4a7b45117ece-runc.oGMqjI.mount: Deactivated successfully.
Apr 30 00:33:25.746829 sshd[4525]: Accepted publickey for core from 139.178.89.65 port 49100 ssh2: RSA SHA256:Z5W/GaQaT6a1P913sOSNU1tCxAIyxSb0Z7O2Sul6T8E
Apr 30 00:33:25.748352 sshd-session[4525]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:33:25.753151 systemd-logind[1474]: New session 22 of user core.
Apr 30 00:33:25.757164 systemd[1]: Started session-22.scope - Session 22 of User core.
Apr 30 00:33:26.042167 containerd[1498]: time="2025-04-30T00:33:26.042040156Z" level=info msg="CreateContainer within sandbox \"16c9f60456480e9b473e3c5019551bb6bd97460e50d5f980031d4a7b45117ece\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Apr 30 00:33:26.060247 containerd[1498]: time="2025-04-30T00:33:26.060207102Z" level=info msg="CreateContainer within sandbox \"16c9f60456480e9b473e3c5019551bb6bd97460e50d5f980031d4a7b45117ece\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"69000824b7520d4e894ac18dba73e14a2cf8da6a3a106a19b2fb7b5297275725\""
Apr 30 00:33:26.061081 containerd[1498]: time="2025-04-30T00:33:26.060764534Z" level=info msg="StartContainer for \"69000824b7520d4e894ac18dba73e14a2cf8da6a3a106a19b2fb7b5297275725\""
Apr 30 00:33:26.087145 systemd[1]: Started cri-containerd-69000824b7520d4e894ac18dba73e14a2cf8da6a3a106a19b2fb7b5297275725.scope - libcontainer container 69000824b7520d4e894ac18dba73e14a2cf8da6a3a106a19b2fb7b5297275725.
Apr 30 00:33:26.106110 containerd[1498]: time="2025-04-30T00:33:26.106072543Z" level=info msg="StartContainer for \"69000824b7520d4e894ac18dba73e14a2cf8da6a3a106a19b2fb7b5297275725\" returns successfully"
Apr 30 00:33:26.111731 systemd[1]: cri-containerd-69000824b7520d4e894ac18dba73e14a2cf8da6a3a106a19b2fb7b5297275725.scope: Deactivated successfully.
Apr 30 00:33:26.132967 containerd[1498]: time="2025-04-30T00:33:26.132910803Z" level=info msg="shim disconnected" id=69000824b7520d4e894ac18dba73e14a2cf8da6a3a106a19b2fb7b5297275725 namespace=k8s.io
Apr 30 00:33:26.132967 containerd[1498]: time="2025-04-30T00:33:26.132954627Z" level=warning msg="cleaning up after shim disconnected" id=69000824b7520d4e894ac18dba73e14a2cf8da6a3a106a19b2fb7b5297275725 namespace=k8s.io
Apr 30 00:33:26.133160 containerd[1498]: time="2025-04-30T00:33:26.132961922Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 30 00:33:26.415103 sshd[4690]: Connection closed by 139.178.89.65 port 49100
Apr 30 00:33:26.415743 sshd-session[4525]: pam_unix(sshd:session): session closed for user core
Apr 30 00:33:26.418128 systemd[1]: sshd@22-37.27.9.63:22-139.178.89.65:49100.service: Deactivated successfully.
Apr 30 00:33:26.420100 systemd-logind[1474]: Session 22 logged out. Waiting for processes to exit.
Apr 30 00:33:26.420456 systemd[1]: session-22.scope: Deactivated successfully.
Apr 30 00:33:26.421485 systemd-logind[1474]: Removed session 22.
Apr 30 00:33:26.580761 systemd[1]: Started sshd@23-37.27.9.63:22-139.178.89.65:49106.service - OpenSSH per-connection server daemon (139.178.89.65:49106).
Apr 30 00:33:26.659920 kubelet[2769]: I0430 00:33:26.659869 2769 setters.go:600] "Node became not ready" node="ci-4152-2-3-b-856bdfce49" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-04-30T00:33:26Z","lastTransitionTime":"2025-04-30T00:33:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Apr 30 00:33:26.674144 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-69000824b7520d4e894ac18dba73e14a2cf8da6a3a106a19b2fb7b5297275725-rootfs.mount: Deactivated successfully.
Apr 30 00:33:27.045381 containerd[1498]: time="2025-04-30T00:33:27.045129504Z" level=info msg="CreateContainer within sandbox \"16c9f60456480e9b473e3c5019551bb6bd97460e50d5f980031d4a7b45117ece\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Apr 30 00:33:27.060919 containerd[1498]: time="2025-04-30T00:33:27.060853128Z" level=info msg="CreateContainer within sandbox \"16c9f60456480e9b473e3c5019551bb6bd97460e50d5f980031d4a7b45117ece\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"02e9bd8af0c8eff08d2d5d2ae572b358b53c8f671f9059daac1450e1aa89c2d0\""
Apr 30 00:33:27.064422 containerd[1498]: time="2025-04-30T00:33:27.062528743Z" level=info msg="StartContainer for \"02e9bd8af0c8eff08d2d5d2ae572b358b53c8f671f9059daac1450e1aa89c2d0\""
Apr 30 00:33:27.089150 systemd[1]: Started cri-containerd-02e9bd8af0c8eff08d2d5d2ae572b358b53c8f671f9059daac1450e1aa89c2d0.scope - libcontainer container 02e9bd8af0c8eff08d2d5d2ae572b358b53c8f671f9059daac1450e1aa89c2d0.
Apr 30 00:33:27.106060 systemd[1]: cri-containerd-02e9bd8af0c8eff08d2d5d2ae572b358b53c8f671f9059daac1450e1aa89c2d0.scope: Deactivated successfully.
Apr 30 00:33:27.108733 containerd[1498]: time="2025-04-30T00:33:27.108547282Z" level=info msg="StartContainer for \"02e9bd8af0c8eff08d2d5d2ae572b358b53c8f671f9059daac1450e1aa89c2d0\" returns successfully"
Apr 30 00:33:27.121233 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-02e9bd8af0c8eff08d2d5d2ae572b358b53c8f671f9059daac1450e1aa89c2d0-rootfs.mount: Deactivated successfully.
Apr 30 00:33:27.129376 containerd[1498]: time="2025-04-30T00:33:27.129329090Z" level=info msg="shim disconnected" id=02e9bd8af0c8eff08d2d5d2ae572b358b53c8f671f9059daac1450e1aa89c2d0 namespace=k8s.io
Apr 30 00:33:27.129376 containerd[1498]: time="2025-04-30T00:33:27.129370179Z" level=warning msg="cleaning up after shim disconnected" id=02e9bd8af0c8eff08d2d5d2ae572b358b53c8f671f9059daac1450e1aa89c2d0 namespace=k8s.io
Apr 30 00:33:27.129376 containerd[1498]: time="2025-04-30T00:33:27.129377574Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 30 00:33:27.547338 sshd[4755]: Accepted publickey for core from 139.178.89.65 port 49106 ssh2: RSA SHA256:Z5W/GaQaT6a1P913sOSNU1tCxAIyxSb0Z7O2Sul6T8E
Apr 30 00:33:27.548585 sshd-session[4755]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:33:27.553063 systemd-logind[1474]: New session 23 of user core.
Apr 30 00:33:27.559197 systemd[1]: Started session-23.scope - Session 23 of User core.
Apr 30 00:33:28.048576 containerd[1498]: time="2025-04-30T00:33:28.048485363Z" level=info msg="CreateContainer within sandbox \"16c9f60456480e9b473e3c5019551bb6bd97460e50d5f980031d4a7b45117ece\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Apr 30 00:33:28.067286 containerd[1498]: time="2025-04-30T00:33:28.066885810Z" level=info msg="CreateContainer within sandbox \"16c9f60456480e9b473e3c5019551bb6bd97460e50d5f980031d4a7b45117ece\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"46e35fabc77ddc5a0937580e5e89d9f059dab32f4e5bcf1925dc5c8c5135ebde\""
Apr 30 00:33:28.071205 containerd[1498]: time="2025-04-30T00:33:28.068501649Z" level=info msg="StartContainer for \"46e35fabc77ddc5a0937580e5e89d9f059dab32f4e5bcf1925dc5c8c5135ebde\""
Apr 30 00:33:28.105119 systemd[1]: Started cri-containerd-46e35fabc77ddc5a0937580e5e89d9f059dab32f4e5bcf1925dc5c8c5135ebde.scope - libcontainer container 46e35fabc77ddc5a0937580e5e89d9f059dab32f4e5bcf1925dc5c8c5135ebde.
Apr 30 00:33:28.147549 containerd[1498]: time="2025-04-30T00:33:28.147512787Z" level=info msg="StartContainer for \"46e35fabc77ddc5a0937580e5e89d9f059dab32f4e5bcf1925dc5c8c5135ebde\" returns successfully"
Apr 30 00:33:28.620039 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Apr 30 00:33:29.075997 kubelet[2769]: I0430 00:33:29.070751 2769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-xkrr5" podStartSLOduration=5.070728464 podStartE2EDuration="5.070728464s" podCreationTimestamp="2025-04-30 00:33:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 00:33:29.070063174 +0000 UTC m=+338.830120837" watchObservedRunningTime="2025-04-30 00:33:29.070728464 +0000 UTC m=+338.830786138"
Apr 30 00:33:30.403938 systemd[1]: run-containerd-runc-k8s.io-46e35fabc77ddc5a0937580e5e89d9f059dab32f4e5bcf1925dc5c8c5135ebde-runc.Lw7zaP.mount: Deactivated successfully.
Apr 30 00:33:31.300663 systemd-networkd[1399]: lxc_health: Link UP
Apr 30 00:33:31.310543 systemd-networkd[1399]: lxc_health: Gained carrier
Apr 30 00:33:32.945247 systemd-networkd[1399]: lxc_health: Gained IPv6LL
Apr 30 00:33:37.000506 sshd[4812]: Connection closed by 139.178.89.65 port 49106
Apr 30 00:33:37.001639 sshd-session[4755]: pam_unix(sshd:session): session closed for user core
Apr 30 00:33:37.004649 systemd[1]: sshd@23-37.27.9.63:22-139.178.89.65:49106.service: Deactivated successfully.
Apr 30 00:33:37.006699 systemd[1]: session-23.scope: Deactivated successfully.
Apr 30 00:33:37.008609 systemd-logind[1474]: Session 23 logged out. Waiting for processes to exit.
Apr 30 00:33:37.010382 systemd-logind[1474]: Removed session 23.
Apr 30 00:33:50.355334 containerd[1498]: time="2025-04-30T00:33:50.355266892Z" level=info msg="StopPodSandbox for \"93f047671f000fa91503ca5b61f083f48865c68e4a36228aeb16f35d1eed6c0e\""
Apr 30 00:33:50.355804 containerd[1498]: time="2025-04-30T00:33:50.355392152Z" level=info msg="TearDown network for sandbox \"93f047671f000fa91503ca5b61f083f48865c68e4a36228aeb16f35d1eed6c0e\" successfully"
Apr 30 00:33:50.355804 containerd[1498]: time="2025-04-30T00:33:50.355440757Z" level=info msg="StopPodSandbox for \"93f047671f000fa91503ca5b61f083f48865c68e4a36228aeb16f35d1eed6c0e\" returns successfully"
Apr 30 00:33:50.355804 containerd[1498]: time="2025-04-30T00:33:50.355773838Z" level=info msg="RemovePodSandbox for \"93f047671f000fa91503ca5b61f083f48865c68e4a36228aeb16f35d1eed6c0e\""
Apr 30 00:33:50.355804 containerd[1498]: time="2025-04-30T00:33:50.355797153Z" level=info msg="Forcibly stopping sandbox \"93f047671f000fa91503ca5b61f083f48865c68e4a36228aeb16f35d1eed6c0e\""
Apr 30 00:33:50.355932 containerd[1498]: time="2025-04-30T00:33:50.355836990Z" level=info msg="TearDown network for sandbox \"93f047671f000fa91503ca5b61f083f48865c68e4a36228aeb16f35d1eed6c0e\" successfully"
Apr 30 00:33:50.360369 containerd[1498]: time="2025-04-30T00:33:50.360334950Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"93f047671f000fa91503ca5b61f083f48865c68e4a36228aeb16f35d1eed6c0e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Apr 30 00:33:50.360451 containerd[1498]: time="2025-04-30T00:33:50.360382762Z" level=info msg="RemovePodSandbox \"93f047671f000fa91503ca5b61f083f48865c68e4a36228aeb16f35d1eed6c0e\" returns successfully"
Apr 30 00:33:50.360911 containerd[1498]: time="2025-04-30T00:33:50.360672650Z" level=info msg="StopPodSandbox for \"5ef57b48e6d15a5060d36d2e6b2ca8d374cf1f99fa07bfac4203473b1e44b1a1\""
Apr 30 00:33:50.360911 containerd[1498]: time="2025-04-30T00:33:50.360722707Z" level=info msg="TearDown network for sandbox \"5ef57b48e6d15a5060d36d2e6b2ca8d374cf1f99fa07bfac4203473b1e44b1a1\" successfully"
Apr 30 00:33:50.360911 containerd[1498]: time="2025-04-30T00:33:50.360731865Z" level=info msg="StopPodSandbox for \"5ef57b48e6d15a5060d36d2e6b2ca8d374cf1f99fa07bfac4203473b1e44b1a1\" returns successfully"
Apr 30 00:33:50.361171 containerd[1498]: time="2025-04-30T00:33:50.361113409Z" level=info msg="RemovePodSandbox for \"5ef57b48e6d15a5060d36d2e6b2ca8d374cf1f99fa07bfac4203473b1e44b1a1\""
Apr 30 00:33:50.361305 containerd[1498]: time="2025-04-30T00:33:50.361278848Z" level=info msg="Forcibly stopping sandbox \"5ef57b48e6d15a5060d36d2e6b2ca8d374cf1f99fa07bfac4203473b1e44b1a1\""
Apr 30 00:33:50.361398 containerd[1498]: time="2025-04-30T00:33:50.361356277Z" level=info msg="TearDown network for sandbox \"5ef57b48e6d15a5060d36d2e6b2ca8d374cf1f99fa07bfac4203473b1e44b1a1\" successfully"
Apr 30 00:33:50.363868 containerd[1498]: time="2025-04-30T00:33:50.363832826Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5ef57b48e6d15a5060d36d2e6b2ca8d374cf1f99fa07bfac4203473b1e44b1a1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Apr 30 00:33:50.363930 containerd[1498]: time="2025-04-30T00:33:50.363879896Z" level=info msg="RemovePodSandbox \"5ef57b48e6d15a5060d36d2e6b2ca8d374cf1f99fa07bfac4203473b1e44b1a1\" returns successfully"
Apr 30 00:33:52.589796 systemd[1]: cri-containerd-dc7cc55b9568f0d62478d27ba63d3f6f88be5c9a659a908dd02f1a918cd49e91.scope: Deactivated successfully.
Apr 30 00:33:52.590372 systemd[1]: cri-containerd-dc7cc55b9568f0d62478d27ba63d3f6f88be5c9a659a908dd02f1a918cd49e91.scope: Consumed 4.325s CPU time, 17.6M memory peak, 0B memory swap peak.
Apr 30 00:33:52.608711 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dc7cc55b9568f0d62478d27ba63d3f6f88be5c9a659a908dd02f1a918cd49e91-rootfs.mount: Deactivated successfully.
Apr 30 00:33:52.622615 containerd[1498]: time="2025-04-30T00:33:52.622535722Z" level=info msg="shim disconnected" id=dc7cc55b9568f0d62478d27ba63d3f6f88be5c9a659a908dd02f1a918cd49e91 namespace=k8s.io
Apr 30 00:33:52.622615 containerd[1498]: time="2025-04-30T00:33:52.622604865Z" level=warning msg="cleaning up after shim disconnected" id=dc7cc55b9568f0d62478d27ba63d3f6f88be5c9a659a908dd02f1a918cd49e91 namespace=k8s.io
Apr 30 00:33:52.622615 containerd[1498]: time="2025-04-30T00:33:52.622614844Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 30 00:33:52.789165 kubelet[2769]: E0430 00:33:52.788574 2769 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:34670->10.0.0.2:2379: read: connection timed out"
Apr 30 00:33:52.797272 systemd[1]: cri-containerd-76ab99652735e1c521804ffefc130d0efb273e97d320acf0a065b30c9835e8fd.scope: Deactivated successfully.
Apr 30 00:33:52.797583 systemd[1]: cri-containerd-76ab99652735e1c521804ffefc130d0efb273e97d320acf0a065b30c9835e8fd.scope: Consumed 1.274s CPU time, 15.9M memory peak, 0B memory swap peak.
Apr 30 00:33:52.829823 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-76ab99652735e1c521804ffefc130d0efb273e97d320acf0a065b30c9835e8fd-rootfs.mount: Deactivated successfully.
Apr 30 00:33:52.833630 containerd[1498]: time="2025-04-30T00:33:52.833365715Z" level=info msg="shim disconnected" id=76ab99652735e1c521804ffefc130d0efb273e97d320acf0a065b30c9835e8fd namespace=k8s.io
Apr 30 00:33:52.833630 containerd[1498]: time="2025-04-30T00:33:52.833466830Z" level=warning msg="cleaning up after shim disconnected" id=76ab99652735e1c521804ffefc130d0efb273e97d320acf0a065b30c9835e8fd namespace=k8s.io
Apr 30 00:33:52.833630 containerd[1498]: time="2025-04-30T00:33:52.833480736Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 30 00:33:53.104609 kubelet[2769]: I0430 00:33:53.104537 2769 scope.go:117] "RemoveContainer" containerID="dc7cc55b9568f0d62478d27ba63d3f6f88be5c9a659a908dd02f1a918cd49e91"
Apr 30 00:33:53.106811 containerd[1498]: time="2025-04-30T00:33:53.106777749Z" level=info msg="CreateContainer within sandbox \"09e1f8e889c243ae25beac8bb00674cb2d885f3a52c4f80625755c593af03eda\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Apr 30 00:33:53.107314 kubelet[2769]: I0430 00:33:53.107253 2769 scope.go:117] "RemoveContainer" containerID="76ab99652735e1c521804ffefc130d0efb273e97d320acf0a065b30c9835e8fd"
Apr 30 00:33:53.109052 containerd[1498]: time="2025-04-30T00:33:53.108983637Z" level=info msg="CreateContainer within sandbox \"ed60ff404ced9b7c66a382d063e85ba95be4fbd6bdf1fbe6c7105bed2e9e49be\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Apr 30 00:33:53.120965 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1696520970.mount: Deactivated successfully.
Apr 30 00:33:53.125298 containerd[1498]: time="2025-04-30T00:33:53.125264799Z" level=info msg="CreateContainer within sandbox \"ed60ff404ced9b7c66a382d063e85ba95be4fbd6bdf1fbe6c7105bed2e9e49be\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"ddfe6d84dc6d4cfbcbebdafc2b412628d19ac653f9416414f2aa9fe4d6ce00ee\""
Apr 30 00:33:53.125639 containerd[1498]: time="2025-04-30T00:33:53.125614983Z" level=info msg="StartContainer for \"ddfe6d84dc6d4cfbcbebdafc2b412628d19ac653f9416414f2aa9fe4d6ce00ee\""
Apr 30 00:33:53.128029 containerd[1498]: time="2025-04-30T00:33:53.127967433Z" level=info msg="CreateContainer within sandbox \"09e1f8e889c243ae25beac8bb00674cb2d885f3a52c4f80625755c593af03eda\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"a140a2e6a31bd117c0ced815d9aa42908c794b49f610fed82f40ebd980da8a36\""
Apr 30 00:33:53.128805 containerd[1498]: time="2025-04-30T00:33:53.128508184Z" level=info msg="StartContainer for \"a140a2e6a31bd117c0ced815d9aa42908c794b49f610fed82f40ebd980da8a36\""
Apr 30 00:33:53.152135 systemd[1]: Started cri-containerd-ddfe6d84dc6d4cfbcbebdafc2b412628d19ac653f9416414f2aa9fe4d6ce00ee.scope - libcontainer container ddfe6d84dc6d4cfbcbebdafc2b412628d19ac653f9416414f2aa9fe4d6ce00ee.
Apr 30 00:33:53.160166 systemd[1]: Started cri-containerd-a140a2e6a31bd117c0ced815d9aa42908c794b49f610fed82f40ebd980da8a36.scope - libcontainer container a140a2e6a31bd117c0ced815d9aa42908c794b49f610fed82f40ebd980da8a36.
Apr 30 00:33:53.189604 containerd[1498]: time="2025-04-30T00:33:53.189563915Z" level=info msg="StartContainer for \"ddfe6d84dc6d4cfbcbebdafc2b412628d19ac653f9416414f2aa9fe4d6ce00ee\" returns successfully"
Apr 30 00:33:53.200242 containerd[1498]: time="2025-04-30T00:33:53.200213203Z" level=info msg="StartContainer for \"a140a2e6a31bd117c0ced815d9aa42908c794b49f610fed82f40ebd980da8a36\" returns successfully"
Apr 30 00:33:56.883081 kubelet[2769]: I0430 00:33:56.883002 2769 status_manager.go:851] "Failed to get status for pod" podUID="2b6c0edf66843b2bbae5344a8079bd14" pod="kube-system/kube-controller-manager-ci-4152-2-3-b-856bdfce49" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:34596->10.0.0.2:2379: read: connection timed out"
Apr 30 00:33:57.718106 kubelet[2769]: E0430 00:33:57.716776 2769 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:34478->10.0.0.2:2379: read: connection timed out" event="&Event{ObjectMeta:{kube-apiserver-ci-4152-2-3-b-856bdfce49.183af1664cd746c5 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-4152-2-3-b-856bdfce49,UID:4db39075849125ceb817e43294f12923,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-4152-2-3-b-856bdfce49,},FirstTimestamp:2025-04-30 00:33:47.237623493 +0000 UTC m=+356.997681167,LastTimestamp:2025-04-30 00:33:47.237623493 +0000 UTC m=+356.997681167,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4152-2-3-b-856bdfce49,}"